WorldWideScience

Sample records for digital fundus camera

  1. Diabetic Retinopathy Screening Ratio Is Improved When Using a Digital, Nonmydriatic Fundus Camera Onsite in a Diabetes Outpatient Clinic

    Directory of Open Access Journals (Sweden)

    Pia Roser

    2016-01-01

    Full Text Available Objective. To evaluate the effect of onsite screening with a nonmydriatic, digital fundus camera for diabetic retinopathy (DR) at a diabetes outpatient clinic. Research Design and Methods. This cross-sectional study included 502 patients, 112 with type 1 and 390 with type 2 diabetes. Patients attended screenings for microvascular complications, including diabetic nephropathy (DN), diabetic polyneuropathy (DP), and DR. Single-field retinal imaging with a digital, nonmydriatic fundus camera was used to assess DR. Prevalence and incidence of microvascular complications were analyzed, and the ratio of newly diagnosed to preexisting complications was calculated for each entity in order to differentiate natural progression from missed diagnoses. Results. Across both types of diabetes, the prevalence of DR was 25.0% (n=126) and the incidence 6.4% (n=32) (T1DM versus T2DM: prevalence 35.7% versus 22.1%, incidence 5.4% versus 6.7%); 25.4% of all DRs were newly diagnosed. Furthermore, the ratio of newly diagnosed to preexisting DR was higher than those for DN (p=0.12) and DP (p=0.03), representing at least 13 patients with missed DR. Conclusions. The results indicate that implementing nonmydriatic, digital fundus imaging in a diabetes outpatient clinic can contribute to improved early diagnosis of diabetic retinopathy.

  2. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which must provide low-light illumination of the human retina, high resolution in the retina, and a reflection-free image. Those constraints make its optical design very sophisticated, but the most difficult requirements to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by an LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  3. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    Science.gov (United States)

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

    Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (by ETDRS level). The two camera systems agreed closely regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41), supporting the Visucam(PRO NM) as an alternative camera for applications and clinical trials requiring 7-field stereo photography.

  4. Do it yourself smartphone fundus camera – DIYretCAM

    Directory of Open Access Journals (Sweden)

    Biju Raju

    2016-01-01

    Full Text Available This article describes the method to make a do-it-yourself smartphone-based fundus camera which can image the central retina as well as the peripheral retina up to the pars plana. It is a cost-effective alternative to a conventional fundus camera.

  5. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide-field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even in bright ambient light. We realized a mobile demonstrator to prove the method and successfully acquired color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass-market components to show that no major additional technical outlay is needed to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry of the optical design found in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field 20° in diameter to a rectangular field 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes in vessels in the region of the papilla and a change in the paleness of the papilla.

  6. Screening for diabetic retinopathy in rural area using single-field, digital fundus images.

    Science.gov (United States)

    Ruamviboonsuk, Paisan; Wongcumchang, Nattapon; Surawongsin, Pattamaporn; Panyawatananukul, Ekchai; Tiensuwan, Montip

    2005-02-01

    To evaluate the practicability of using single-field, 2.3-million-pixel digital fundus images for screening of diabetic retinopathy in rural areas. All diabetic patients who regularly attended the diabetic clinic at Kabcheang Community Hospital, located 15 kilometers from the Thailand-Cambodia border, were scheduled to attend the hospital for a 3-day diabetic retinopathy screening programme. The fundi of all patients were captured in single-field, 45-degree, 2.3-million-pixel images using a nonmydriatic digital fundus camera and then sent to a reading center in Bangkok. The fundi were also examined through dilated pupils by a retinal specialist at the hospital. The grading of diabetic retinopathy from the two methods was compared for exact agreement. The average duration of a single digital fundus image capture was 2 minutes. The average file size of each image was 750 kilobytes. The average duration of single image transmission to the reading center in Bangkok via satellite was 3 minutes; via a conventional telephone line it was 8 minutes. Of all 150 patients, 130 were assessed for agreement between dilated fundus examination and digital fundus images in the diagnosis of diabetic retinopathy. The exact agreement was 0.87 and the weighted kappa statistic was 0.74. The sensitivity of digital fundus images in detecting diabetic retinopathy was 80% and the specificity was 96%. For diabetic macular edema the exact agreement was 0.97, the weighted kappa was 0.43, the sensitivity was 43%, and the specificity was 100%. The image capture of the nonmydriatic digital fundus camera is suitable for screening of diabetic retinopathy, and single-field digital fundus images are potentially acceptable tools for the screening. Real-time image transmission via telephone lines to a remote reading center, however, may not be practical for routine diabetic retinopathy screening in rural areas.
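
    The statistics reported in this abstract (exact agreement, weighted kappa, sensitivity, specificity) can be reproduced from paired gradings; the following minimal Python sketch uses made-up grades, not the study's data, purely as an illustration of how such figures are computed.

      # Illustration only: agreement statistics between two grading methods.
      import numpy as np
      from sklearn.metrics import cohen_kappa_score, confusion_matrix

      # hypothetical retinopathy grades (0 = none, 1 = mild, 2 = moderate, 3 = severe)
      exam_grade  = np.array([0, 0, 1, 2, 0, 3, 1, 0, 2, 0])
      image_grade = np.array([0, 0, 1, 1, 0, 3, 1, 0, 2, 1])

      exact_agreement = np.mean(exam_grade == image_grade)
      weighted_kappa = cohen_kappa_score(exam_grade, image_grade, weights="linear")

      # sensitivity / specificity for detecting "any DR" (grade > 0)
      exam_any, image_any = exam_grade > 0, image_grade > 0
      tn, fp, fn, tp = confusion_matrix(exam_any, image_any).ravel()
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(exact_agreement, weighted_kappa, sensitivity, specificity)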

  7. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Full Text Available Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  8. Image quality characteristics of a novel colour scanning digital ophthalmoscope (SDO) compared with fundus photography.

    Science.gov (United States)

    Strauss, Rupert W; Krieglstein, Tina R; Priglinger, Siegfried G; Reis, Werner; Ulbig, Michael W; Kampik, Anselm; Neubauer, Aljoscha S

    2007-11-01

    To establish a set of quality parameters for grading image quality and apply those to evaluate the fundus image quality obtained by a new scanning digital ophthalmoscope (SDO) compared with standard slide photography. On visual analogue scales a total of eight image characteristics were defined: overall quality, contrast, colour brilliance, focus (sharpness), resolution and details, noise, artefacts and validity of clinical assessment. Grading was repeated after 4 months to assess repeatability. Fundus images of 23 patients imaged digitally by SDO and by Zeiss 450FF fundus camera using Kodak film were graded side-by-side by three graders. Lens opacity was quantified with the Interzeag Lens Opacity Meter 701. For all of the eight scales of image quality, good repeatability within the graders (mean Kendall's W 0.69) was obtained after 4 months. Inter-grader agreement ranged between 0.31 and 0.66. Despite the SDO's limited nominal image resolution of 720 x 576 pixels, the Zeiss FF 450 camera performed better in only two of the subscales - noise (p = 0.001) and artefacts (p = 0.01). Lens opacities significantly influenced only the two subscales 'resolution' and 'details', which deteriorated with increasing media opacities for both imaging systems. Distinct scales to grade image characteristics of different origin were developed and validated. Overall SDO digital imaging was found to provide fundus pictures of a similarly high level of quality as expert photography on slides.

  9. [Cinematography of ocular fundus with a jointed optical system and TV or cine-camera (author's transl)].

    Science.gov (United States)

    Kampik, A; Rapp, J

    1979-02-01

    A method of cinematography of the ocular fundus is introduced which, by connecting a camera to an indirect ophthalmoscope, allows recording of the monocular picture of the fundus as produced by the ophthalmic lens.

  10. Realization of the ergonomics design and automatic control of the fundus cameras

    Science.gov (United States)

    Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye

    2012-12-01

    Ergonomic design of fundus cameras should extend patient comfort and ease of use through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the pupils of patients undergoing fundus examination. This system consists of an electronically controlled chin rest that moves up and down, lateral movement of the binocular head together with the detector, and automatic refocusing on the edges of the pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects fundus images automatically whether or not the eye is ametropic. Finally, a moving visual target is developed for expanding the field of the fundus images.

  11. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of diabetic retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with the methodology have been successfully classified by an MLP neural network (correct classification rate of 84%).
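
    As a loose illustration of the final classification step the abstract mentions (an MLP applied to features of candidate MAs), the sketch below uses scikit-learn with entirely hypothetical feature vectors; it is not the authors' implementation.

      # Illustration only: classifying candidate microaneurysm regions with an MLP.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # hypothetical features per candidate: area, circularity, mean intensity, contrast
      X = rng.normal(size=(500, 4))
      y = rng.integers(0, 2, size=500)          # 1 = true MA, 0 = false positive

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
      clf.fit(X_train, y_train)
      print("correct classification rate:", clf.score(X_test, y_test))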

  12. Telemedicine for diabetic retinopathy screening using an ultra-widefield fundus camera

    Directory of Open Access Journals (Sweden)

    Hussain N

    2017-08-01

    Full Text Available Nazimul Hussain (Department of Ophthalmology, Al Zahra Hospital, Sharjah, United Arab Emirates); Maryam Edraki, Rima Tahhan, Nishanth Sanalkumar, Sami Kenz, Nagwa Khalil Akasha, Brian Mtemererwa, Nahed Mohammed (Department of Endocrinology, Al Zahra Hospital, Sharjah, United Arab Emirates). Objective: Telemedicine reporting of diabetic retinopathy (DR) screening using an ultra-widefield (UWF) fundus camera. Materials and methods: Cross-sectional study of diabetic patients who visited the endocrinology department of a private multi-specialty hospital in the United Arab Emirates between April 2015 and January 2017 and underwent UWF fundus imaging. Fundus pictures were then accessed at the Retina Clinic in the Department of Ophthalmology. The primary outcome measure was the incidence of any form of DR detected. The secondary outcome measure was failure to obtain a gradable image. Results: A total of 1,024 diabetic individuals were screened for DR from April 2015 to January 2017 in the Department of Endocrinology. The rate of DR was 9.27%; 165 eyes of 95 individuals were diagnosed with some form of DR. Mild non-proliferative DR (NPDR) was seen in 114 of 165 eyes (69.09%), moderate NPDR in 32 eyes (19.39%), severe NPDR in six eyes (3.64%), and proliferative DR (PDR) in 13 eyes (7.88%). For the secondary outcome measure, poor image acquisition was seen in one individual, who had an image acquired in one eye that could not be graded due to bad picture quality. Conclusions: The present study has shown the effectiveness of DR screening using a UWF fundus camera. It has also shown the effectiveness of trained nursing personnel taking fundus images. This model can be replicated in any private multi-specialty hospital to reduce the burden of DR screening in the retina clinic and enhance early detection of treatable DR. Keywords: telemedicine, ultra-widefield camera, diabetic retinopathy screening

  13. The sensitivity and specificity of one field non-mydriatic digital fundus photography for DR screening

    Directory of Open Access Journals (Sweden)

    Bin-Bin Li

    2013-07-01

    Full Text Available AIM: To evaluate the sensitivity and specificity of one-field non-mydriatic digital fundus photography and direct ophthalmoscopy for diabetic retinopathy (DR) screening, compared with fundus fluorescein angiography (FFA). METHODS: All 93 patients with type 1 or type 2 diabetes underwent one-field non-mydriatic digital fundus photography, direct ophthalmoscopy with pupil dilation, and FFA performed by ophthalmologists. The sensitivity and specificity of one-field non-mydriatic digital fundus photography and of direct ophthalmoscopy were calculated respectively, with FFA as the reference. RESULTS: The sensitivity and specificity of one-field non-mydriatic digital fundus photography for detection of any DR were 80.4% and 94.7%; the sensitivity and specificity of direct ophthalmoscopy for detection of any DR were 64.2% and 84.2%. When the threshold for referable DR was lowered to moderate non-proliferative diabetic retinopathy (M-NPDR), the sensitivity and specificity of non-mydriatic digital fundus photography were 88.9% and 98.4%, and the sensitivity and specificity of direct ophthalmoscopy were 71.5% and 96.7%. CONCLUSION: One-field non-mydriatic digital fundus photography is an effective method for DR screening.

  14. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  15. Strategies of digital fundus photography for screening diabetic retinopathy in a diabetic population in urban China.

    Science.gov (United States)

    Ding, Jiyuan; Zou, Yanhong; Liu, Ningpu; Jiang, Li; Ren, Xuetao; Jia, Wei; Snellingen, Torkel; Chongsuvivatwong, Virasakdi; Liu, Xipu

    2012-12-01

    To evaluate the effect of mydriasis and different field strategies on the technical failure rate, the probability of referring patients with diabetic retinopathy (DR; sensitivity) and the probability of not referring patients without DR (specificity) of digital photography in screening with a fundus camera. A total of 531 patients with diabetes underwent fundus photography with cross-combinations of mydriasis/nonmydriasis and single-field/two-field strategies, followed by slit lamp biomicroscopic examination by a trained ophthalmologist. Fundus photographs were graded independently by another experienced ophthalmologist. Calculations were first based on cases with non-gradable images treated as being referred and then with them excluded. Percentages of DR and referable DR in this patient cohort were 22.4% and 7.7%, respectively, based on slit lamp biomicroscopic examination. Mydriasis significantly reduced the technical failure rate from 27.1% to 8.3% under a single-field strategy, and from 28.2% to 8.9% under a two-field strategy. As compared to the single-field strategy, the two-field strategy increased sensitivity from 75.6% to 87.8% without mydriasis and from 73.2% to 90.2% with mydriasis. Mydriasis increased specificity from 68.8% to 84.3% in the single-field strategy and from 64.7% to 81.6% in the two-field strategy. When subjects with non-gradable images were excluded, the two-field strategy without mydriasis yielded a sensitivity of 85.7% and a specificity of 91.6%. Both mydriasis and the two-field strategy are useful in photographic screening tests. Technical failure should be taken into consideration when screening strategies for DR are determined.
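
    The abstract's two ways of handling non-gradable photographs (counted as referred versus excluded) can be illustrated with a small calculation; the counts in the Python sketch below are hypothetical, not the study's data.

      # Illustration only: sensitivity/specificity under the two conventions.
      def sens_spec(tp, fn, tn, fp):
          return tp / (tp + fn), tn / (tn + fp)

      # hypothetical 2x2 counts among gradable images (reference: slit lamp exam)
      tp, fn, tn, fp = 36, 5, 380, 70
      # hypothetical non-gradable images, split by true (reference) status
      ungradable_with_dr, ungradable_without_dr = 4, 36

      # convention 1: non-gradable counted as referred (adds to tp and fp)
      print("as referred:", sens_spec(tp + ungradable_with_dr, fn,
                                      tn, fp + ungradable_without_dr))
      # convention 2: non-gradable excluded
      print("excluded:   ", sens_spec(tp, fn, tn, fp))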

  16. Contact-free trans-pars-planar illumination enables snapshot fundus camera for nonmydriatic wide field photography.

    Science.gov (United States)

    Wang, Benquan; Toslak, Devrim; Alam, Minhaj Nur; Chan, R V Paul; Yao, Xincheng

    2018-06-08

    In conventional fundus photography, trans-pupillary illumination delivers illuminating light to the interior of the eye through the peripheral area of the pupil, and only the central part of the pupil can be used for collecting imaging light. Therefore, the field of view of conventional fundus cameras is limited, and pupil dilation is required for evaluating the retinal periphery, which is frequently affected by diabetic retinopathy (DR), retinopathy of prematurity (ROP), and other chorioretinal conditions. We report here a nonmydriatic wide field fundus camera employing trans-pars-planar illumination, which delivers illuminating light through the pars plana, an area outside of the pupil. Trans-pars-planar illumination frees the entire pupil for imaging purposes only, and thus wide field fundus photography can be readily achieved with less pupil dilation. For proof-of-concept testing, a prototype instrument built entirely from off-the-shelf components was demonstrated that achieves 90° fundus view coverage in single-shot fundus images without the need for pharmacologic pupil dilation.

  17. Fundus Autofluorescence Captured With a Nonmydriatic Retinal Camera in Vegetarians Versus Nonvegetarians.

    Science.gov (United States)

    Kommana, Sumana S; Padgaonkar, Pooja; Mendez, Nicole; Wu, Lesley; Szirth, Bernard; Khouri, Albert S

    2015-09-09

    A baseline level of lipofuscin in the retinal pigment epithelium (RPE) is inevitable with age, but increased levels due to increased oxidative stress can result in deleterious vision loss at older ages. As earlier detection of differences in levels can lead to superior preventative management, we studied the relationship between lipofuscin accumulation and dietary lifestyle (vegetarian vs. nonvegetarian) in a younger, healthy South Asian population using retinal fundus autofluorescence (FAF) imaging. In this pilot study, we examined 37 healthy subjects (average age 23 ± 1 years), all undergoing similar stress levels as medical students at Rutgers New Jersey Medical School. Lipofuscin concentrations were imaged using a FAF retinal camera (Canon CX-1). Two images (color and FAF) were captured of the left eye and included in the analysis. FAF quantitative scoring was measured in 2 regions of the captured image, the papillo-macular region (P) and the macula (M), by determining the grayscale score of a 35.5 mm² rectangle in the respective regions. Standardized scores (corrected to remove baseline fluorescence) were then obtained. Means, standard deviations, and t tests were performed for comparisons. Fundus autofluorescence scores of regions P and M differed significantly, and vegetarians had statistically significantly lower levels of autofluorescence. These findings can have potential implications regarding long-term retinal health and risk for developing certain diseases over decades in subjects at risk for vision-threatening diseases. © 2015 Diabetes Technology Society.

  18. Modification of a Kowa RC-2 fundus camera for self-photography without the use of mydriatics.

    Science.gov (United States)

    Philpott, D E; Bailey, P F; Harrison, G; Turnbill, C

    1979-01-01

    Research on retinal circulation during space flight required the development of a simple technique to provide self-monitoring of blood vessel changes in the fundus without the use of mydriatics. A Kowa RC-2 fundus camera was modified for self-photography by the use of a bite plate for positioning and cross hairs for focusing the subject's retina relative to the film plane. Dilation of the pupils without the use of mydriatics was accomplished by dark adaptation of the subject. Pictures were obtained without pupil constriction by the use of a high-speed strobe light. This method also has applications for clinical medicine.

  19. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovation in airborne cameras. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  20. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In digital holography (DH), the size of the two-dimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochrome and DSLR cameras are presented, and a theoretical explanation of the replication problem using Fourier theory is also given. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  1. A Web-based telemedicine system for diabetic retinopathy screening using digital fundus photography.

    Science.gov (United States)

    Wei, Jack C; Valentino, Daniel J; Bell, Douglas S; Baker, Richard S

    2006-02-01

    The purpose was to design and implement a Web-based telemedicine system for diabetic retinopathy screening using digital fundus cameras and to make the software publicly available through open-source release. The process of retinal imaging and case reviewing was modeled to optimize workflow and to guide implementation of the computer system. The Web-based system was built on Java Servlet and JavaServer Pages (JSP) technologies. Apache Tomcat was chosen as the JSP engine, while MySQL was used as the main database and the Laboratory of Neuro Imaging (LONI) Image Storage Architecture, from LONI at UCLA, as the platform for image storage. For security, all data transmissions were carried over encrypted Internet connections such as Secure Socket Layer (SSL) and HyperText Transfer Protocol over SSL (HTTPS). User logins were required and access to patient data was logged for auditing. The system was deployed at the Hubert H. Humphrey Comprehensive Health Center and the Martin Luther King/Drew Medical Center of the Los Angeles County Department of Health Services. Within 4 months, 1500 images of more than 650 patients were taken at Humphrey's Eye Clinic and successfully transferred to King/Drew's Department of Ophthalmology. This study demonstrates an effective architecture for remote diabetic retinopathy screening.

  2. Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs.

    NARCIS (Netherlands)

    Niemeijer, M.; Ginneken, B. van; Cree, M.J.; Mizutani, A.; Quellec, G.; Sanchez, C.I.; Zhang, B.; Hornero, R.; Lamard, M.; Muramatsu, C.; Wu, X.; Cazuguel, G.; You, J.; Mayo, A.; Li, Q.; Hatanaka, Y.; Cochener, B.; Roux, C.; Karray, F.; Garcia, M.; Fujita, H.; Abramoff, M.D.

    2010-01-01

    The detection of microaneurysms in digital color fundus photographs is a critical first step in automated screening for diabetic retinopathy (DR), a common complication of diabetes. To accomplish this detection, numerous methods have been published in the past, but none of these has been compared with the others on the same data.

  3. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Integrated microcomputer systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras were implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software package was designed and developed for these purposes. This software allows processing of each nuclear medicine exam in a semiautomatic procedure and recording of the results on radiological film. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming 24.6%, fixing 48.2% and developing 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain nuclear medicine images of optimal clinical quality, to increase acquisition and processing efficiency, and to reduce the steps involved in each exam.

  4. Predictors for the progression of geographic atrophy in patients with age-related macular degeneration: fundus autofluorescence study with modified fundus camera.

    Science.gov (United States)

    Jeong, Y J; Hong, I H; Chung, J K; Kim, K L; Kim, H K; Park, S P

    2014-02-01

    We examined the association between abnormal fundus autofluorescence (FAF) features on images obtained by a modified fundus camera (mFC) and geographic atrophy (GA) progression in patients with age-related macular degeneration (AMD). Serial FAF images of 131 eyes from 131 patients with GA were included in the study. All FAF images were obtained with an mFC (excitation, ∼ 500-610 nm; emission, ∼ 675-715 nm). The GA area was quantified at baseline and 1 year later using a customized segmentation program. The yearly GA enlargement rate was then calculated. Abnormal FAF patterns in the junctional zone of GA were classified as None or Minimal change, Focal, Patchy, Banded, or Diffuse according to a previously published classification based on confocal scanning laser ophthalmoscopy (cSLO). The relationship between GA enlargement and abnormal FAF was evaluated. The mean rate of GA enlargement was fastest in eyes with the Diffuse pattern (1.74 mm² per year), followed by eyes with the Banded pattern (1.69 mm² per year). Binary logistic regression analysis revealed that eyes with the Banded and Diffuse patterns had a significantly higher risk for GA enlargement compared with eyes with the other patterns. FAF imaging with an mFC appears to be acceptable for evaluating GA in accordance with an established cSLO-based classification. Eyes with the Banded or the Diffuse patterns of abnormal FAF at baseline indicate a high risk for GA progression. Identifying patients at high risk for GA progression using an mFC is a broadly available method that can provide additional information to help predict disease course.
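
    As a rough sketch of the two quantities the abstract describes (the yearly GA enlargement rate and a binary logistic regression of enlargement on FAF pattern), the Python fragment below uses synthetic numbers; it is not the authors' analysis.

      # Illustration only: enlargement rate and a simple logistic regression.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      area_baseline = np.array([2.1, 3.5, 1.2, 4.8, 2.9, 3.1])   # mm^2, hypothetical
      area_followup = np.array([3.4, 5.6, 1.5, 6.9, 3.2, 4.9])   # mm^2 one year later
      rate = area_followup - area_baseline                        # mm^2 per year (1-year interval)

      # 1 if junctional FAF pattern was Banded or Diffuse, else 0 (hypothetical labels)
      high_risk_pattern = np.array([1, 1, 0, 1, 0, 1]).reshape(-1, 1)
      fast = (rate > np.median(rate)).astype(int)                 # "fast enlargement" outcome

      model = LogisticRegression().fit(high_risk_pattern, fast)
      print(rate, model.coef_, model.intercept_)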

  5. [Evaluation of diabetic retinopathy screening using non-mydriatic fundus camera performed by physicians' assistants in the endocrinology service].

    Science.gov (United States)

    Barcatali, M-G; Denion, E; Miocque, S; Reznik, Y; Joubert, M; Morera, J; Rod, A; Mouriaux, F

    2015-04-01

    Since 2010, the High Authority for Health (HAS) has recommended the use of the non-mydriatic fundus camera for diabetic retinopathy screening. The purpose of this study is to evaluate the results of screening for diabetic retinopathy using the non-mydriatic retinal camera operated by a physician's assistant in the endocrinology service. This is a retrospective study of all diabetic patients hospitalized in the endocrinology department between May 2013 and November 2013. For each endocrinology patient requiring screening, a previously trained physician's assistant took fundus photographs. The ophthalmologist then provided a written interpretation of the photos on a consultant's sheet. Of the 120 patients screened, 40 (33.3%) patients had uninterpretable photos. Among the 80 interpretable photos, 64 (53.4%) patients had no diabetic retinopathy, and 16 (13.3%) had diabetic retinopathy. No patient had diabetic maculopathy. Specific quality criteria were established by the HAS for screening for diabetic retinopathy using the non-mydriatic retinal camera in order to ensure sufficient sensitivity and specificity. In our study, the two quality criteria were not achieved: the rate of uninterpretable photos and the total number of photos analyzed in a given period. In our center, we discontinued this method of diabetic retinopathy screening due to the high rate of uninterpretable photos. Due to the logistic impossibility of the ophthalmologists taking all the fundus photos, we proposed that the ophthalmic nurses take the photos. They are better trained in the use of the equipment, and can confer directly with an ophthalmologist in questionable cases and obtain pupil dilation as necessary. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  6. Digital holography using a digital photo-camera

    Czech Academy of Sciences Publication Activity Database

    Sekanina, H.; Pospíšil, Jaroslav

    2002-01-01

    Vol. 49, No. 13 (2002), pp. 2083-2092. ISSN 0950-0340. Institutional research plan: CEZ:AV0Z1010921. Keywords: digital holography * photo-camera. Subject RIV: BH - Optics, Masers, Lasers. Impact factor: 1.717, year: 2002

  7. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  8. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT), a high-performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full-color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that such an optimized color camera can achieve colorimetric performance high enough that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
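
    The optimization objective the abstract describes (minimizing perceivable color errors in the 1976 CIELUV space over a set of test colors) can be sketched as follows; the Python fragment assumes a D65 white point and made-up tristimulus values, and is not the MASCOT project code.

      # Illustration only: CIELUV colour difference between reference and reproduced XYZ.
      import numpy as np

      XN, YN, ZN = 95.047, 100.0, 108.883        # D65 reference white (assumed)

      def xyz_to_luv(xyz):
          X, Y, Z = xyz
          d = X + 15 * Y + 3 * Z
          up, vp = 4 * X / d, 9 * Y / d
          dn = XN + 15 * YN + 3 * ZN
          upn, vpn = 4 * XN / dn, 9 * YN / dn
          yr = Y / YN
          L = 116 * yr ** (1 / 3) - 16 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
          return np.array([L, 13 * L * (up - upn), 13 * L * (vp - vpn)])

      def delta_e_uv(xyz_ref, xyz_test):
          return np.linalg.norm(xyz_to_luv(xyz_ref) - xyz_to_luv(xyz_test))

      # hypothetical reference and reproduced tristimulus values for one test colour
      print(delta_e_uv((41.2, 21.3, 1.9), (40.0, 22.0, 2.3)))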

  9. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPEG compression. We reconstruct the noisy, blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
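
    The spectral stage of such a simulation (channel response as an integral of the source spectrum, optics and filter transmittances, and sensor quantum efficiency over wavelength) might look like the Python sketch below; all spectra are placeholders, not the paper's data.

      # Illustration only: one colour channel's response as a spectral integral.
      import numpy as np

      wl = np.arange(400, 701, 10)                          # wavelength grid, nm
      source = np.exp(-((wl - 550) / 120) ** 2)             # hypothetical illuminant spectrum
      lens_t = np.full_like(wl, 0.9, dtype=float)           # flat lens transmittance
      red_filter = 1 / (1 + np.exp(-(wl - 590) / 15))       # hypothetical red colour filter
      qe = 0.6 * np.exp(-((wl - 520) / 150) ** 2)           # hypothetical CMOS quantum efficiency

      # channel signal ~ integral of the product, approximated by a sum over the 10 nm grid
      red_signal = np.sum(source * lens_t * red_filter * qe) * 10
      print(red_signal)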

  10. Color correction pipeline optimization for digital cameras

    Science.gov (United States)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
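
    The second module the abstract mentions, the color matrix transformation, is commonly fitted by least squares; the Python sketch below shows such a fit on random placeholder patches and is not the authors' pipeline.

      # Illustration only: fit a 3x3 colour matrix mapping camera RGB to target XYZ.
      import numpy as np

      rng = np.random.default_rng(1)
      camera_rgb = rng.uniform(0.05, 1.0, size=(24, 3))     # hypothetical chart patches
      true_matrix = np.array([[0.6, 0.3, 0.1],
                              [0.2, 0.7, 0.1],
                              [0.0, 0.1, 0.9]])
      target_xyz = camera_rgb @ true_matrix.T + rng.normal(0, 0.01, size=(24, 3))

      # least-squares fit of M such that target_xyz ~ camera_rgb @ M.T
      M, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
      M = M.T
      print(np.round(M, 3))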

  11. Modification of a Kowa RC-2 fundus camera for self-photography without the use of mydriatics [for blood vessel monitoring during space flight].

    Science.gov (United States)

    Philpott, D. E.; Harrison, G.; Turnbill, C.; Bailey, P. F.

    1979-01-01

    Research on retinal circulation during space flight required the development of a simple technique to provide self-monitoring of blood vessel changes in the fundus without the use of mydriatics. A Kowa RC-2 fundus camera was modified for self-photography by the use of a bite plate for positioning and cross hairs for focusing the subject's retina relative to the film plane. Dilation of the pupils without the use of mydriatics was accomplished by dark adaptation of the subject. Pictures were obtained without pupil constriction by the use of a high-speed strobe light. This method also has applications for clinical medicine.

  12. Fundus Photography in the 21st Century--A Review of Recent Technological Advances and Their Implications for Worldwide Healthcare.

    Science.gov (United States)

    Panwar, Nishtha; Huang, Philemon; Lee, Jiaying; Keane, Pearse A; Chuan, Tjin Swee; Richhariya, Ashutosh; Teoh, Stephen; Lim, Tock Han; Agrawal, Rupesh

    2016-03-01

    The introduction of fundus photography has impacted retinal imaging and retinal screening programs significantly. Fundus cameras play a vital role in addressing the causes of preventable blindness. More attention is being turned to developing countries, where infrastructure and access to healthcare are limited. One of the major limitations for tele-ophthalmology is restricted access to the office-based fundus camera. Recent advances in access to telecommunications, coupled with the introduction of portable cameras and smartphone-based fundus imaging systems, have resulted in an exponential surge in available technologies for portable fundus photography. Retinal cameras in the near future will have to cater to these needs by featuring a low-cost, portable design with automated controls and digitalized images with Web-based transfer. In this review, we aim to highlight the advances in fundus photography for retinal screening as well as discuss the advantages, disadvantages, and implications of the various technologies that are currently available.

  13. Detailed Morphological Changes of Foveoschisis in Patient with X-Linked Retinoschisis Detected by SD-OCT and Adaptive Optics Fundus Camera

    Directory of Open Access Journals (Sweden)

    Keiichiro Akeo

    2015-01-01

    Full Text Available Purpose. To report the morphological and functional changes associated with a regression of foveoschisis in a patient with X-linked retinoschisis (XLRS). Methods. A 42-year-old man with XLRS underwent genetic analysis and detailed ophthalmic examinations. Functional assessments included best-corrected visual acuity (BCVA), full-field electroretinograms (ERGs), and multifocal ERGs (mfERGs). Morphological assessments included fundus photography, spectral-domain optical coherence tomography (SD-OCT), and adaptive optics (AO) fundus imaging. After the baseline clinical data were obtained, topical dorzolamide was applied to the patient. The patient was followed for 24 months. Results. A reported RS1 gene mutation (P203L) was found in the patient. At baseline, his decimal BCVA was 0.15 in the right eye and 0.3 in the left eye. Fundus photographs showed bilateral spoke wheel-appearing maculopathy. SD-OCT confirmed the foveoschisis in the left eye. The AO images of the left eye showed spoke wheel retinal folds, and the folds were thinner than those in the fundus photographs. During the follow-up period, the foveal thickness in the SD-OCT images and the number of retinal folds in the AO images were reduced. Conclusions. We have presented the detailed morphological changes of foveoschisis in a patient with XLRS detected by SD-OCT and an AO fundus camera. However, the findings do not indicate whether the changes were influenced by topical dorzolamide or the natural history.

  14. Holographic interferometry using a digital photo-camera

    International Nuclear Information System (INIS)

    Sekanina, H.; Hledik, S.

    2001-01-01

    The possibilities of running digital holographic interferometry using commonly available compact digital zoom photo-cameras are studied. The recently developed holographic setup, suitable especially for digital photo-cameras equipped with a non-detachable objective lens, is used. The method described enables a simple and straightforward way of both recording and reconstructing digital holographic interferograms. The feasibility of the new method is verified by digital reconstruction of the acquired interferograms, using a numerical code based on the fast Fourier transform. Experimental results obtained are presented and discussed. (authors)
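
    The Fourier-based reconstruction step the abstract refers to can be sketched for an off-axis hologram as follows; the Python fragment below synthesizes a toy hologram and is not the authors' numerical code.

      # Illustration only: off-axis hologram reconstruction via FFT sideband filtering.
      import numpy as np

      N = 256
      y, x = np.mgrid[0:N, 0:N]
      phase = 2 * np.pi * ((x - N / 2) ** 2 + (y - N / 2) ** 2) / (N * 40)  # toy object phase
      carrier = 2 * np.pi * (0.15 * x + 0.10 * y)                           # off-axis tilt fringes
      hologram = 1 + np.cos(carrier - phase)                                # recorded intensity

      spectrum = np.fft.fftshift(np.fft.fft2(hologram))
      cy = N // 2 + int(0.10 * N)                                           # sideband centre (rows)
      cx = N // 2 + int(0.15 * N)                                           # sideband centre (cols)
      win = 20
      sideband = np.zeros_like(spectrum)
      # crop one sideband and re-centre it at zero frequency
      sideband[N // 2 - win:N // 2 + win, N // 2 - win:N // 2 + win] = \
          spectrum[cy - win:cy + win, cx - win:cx + win]

      field = np.fft.ifft2(np.fft.ifftshift(sideband))                      # complex object field
      amplitude, recovered_phase = np.abs(field), np.angle(field)
      print(amplitude.shape, recovered_phase.min(), recovered_phase.max())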

  15. Digital dental photography. Part 4: choosing a camera.

    Science.gov (United States)

    Ahmad, I

    2009-06-13

    With so many cameras and systems on the market, choosing the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single-lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits, close-up or macro images of the dentition, and study casts. However, for the sake of completeness, some other camera systems that are used in dentistry are also discussed.

  16. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    Science.gov (United States)

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…

  17. Comparative study of the polaroid and digital non-mydriatic cameras in the detection of referrable diabetic retinopathy in Australia.

    Science.gov (United States)

    Phiri, R; Keeffe, J E; Harper, C A; Taylor, H R

    2006-08-01

    To show that the non-mydriatic retinal camera (NMRC) using polaroid film is as effective as the NMRC using digital imaging in detecting referrable retinopathy. A series of patients with diabetes attending the eye out-patients department at the Royal Victorian Eye and Ear Hospital had single-field non-mydriatic fundus photographs taken using first a digital and then a polaroid camera. Dilated 30 degrees seven-field stereo fundus photographs were then taken of each eye as the gold standard. The photographs were graded in a masked fashion. Retinopathy levels were defined using the simplified Wisconsin Grading system. We used the kappa statistic for inter-reader and intra-reader agreement and the generalized linear model to derive the odds ratio. There were 196 participants giving 325 undilated retinal photographs. Of these participants, 111 (57%) were males. The mean age of the patients was 68.8 years. There were 298 eyes with all three sets of photographs from 154 patients. The digital NMRC had a sensitivity of 86.2% [95% confidence interval (CI) 65.8, 95.3], whilst the polaroid NMRC had a sensitivity of 84.1% (95% CI 65.5, 93.7). The specificities of the two cameras were identical at 71.2% (95% CI 58.8, 81.1). There was no difference in the ability of the polaroid and digital camera to detect referrable retinopathy (odds ratio 1.06, 95% CI 0.80, 1.40, P = 0.68). This study suggests that non-mydriatic retinal photography using polaroid film is as effective as digital imaging in the detection of referrable retinopathy in countries such as the USA and Australia or others that use the same criterion for referral.

  18. Compact Laser Doppler Flowmeter (LDF) Fundus Camera for the Assessment of Retinal Blood Perfusion in Small Animals.

    Directory of Open Access Journals (Sweden)

    Marielle Mentek

    Full Text Available Noninvasive techniques for ocular blood perfusion assessment are of crucial importance for exploring microvascular alterations related to systemic and ocular diseases. However, few techniques adapted to rodents are available, and most are invasive or not specifically focused on the optic nerve head (ONH), choroid or retinal circulation. Here we present the results obtained with a new rodent-adapted compact fundus camera based on laser Doppler flowmetry (LDF). A confocal miniature flowmeter was fixed to a specially designed 3D rotating mechanical arm and adjusted on a rodent stereotaxic table in order to accurately point the laser beam at the retinal region of interest. The linearity of the LDF measurements was assessed using a rotating Teflon wheel and a flow of microspheres in a glass capillary. In vivo reproducibility was assessed in Wistar rats with repeated measurements (inter-session and inter-day) of retinal artery and ONH blood velocity in six and ten rats, respectively. These parameters were also recorded during an acute intraocular pressure increase to 150 mmHg and after heart arrest (n = 5 rats). The perfusion measurements showed perfect linearity between LDF velocity and Teflon wheel or microsphere speed. Intraclass correlation coefficients for retinal artery and ONH velocity (0.82 and 0.86, respectively) indicated strong inter-session repeatability and stability. Inter-day reproducibility was good (0.79 and 0.7, respectively). Upon ocular blood flow cessation, the retinal artery velocity signal substantially decreased, whereas the ONH signal did not significantly vary, suggesting that it could mostly be attributed to tissue light scattering. We have demonstrated that, while not adapted for ONH blood perfusion assessment, this device allows pertinent, stable and repeatable measurements of retinal blood perfusion in rats.

  19. The Nonmydriatic Fundus Camera in Diabetic Retinopathy Screening: A Cost-Effective Study with Evaluation for Future Large-Scale Application

    Directory of Open Access Journals (Sweden)

    Giuseppe Scarpa

    2016-01-01

    Full Text Available Aims. The study aimed to present the experience of a screening programme for early detection of diabetic retinopathy (DR) using a nonmydriatic fundus camera, evaluating the feasibility in terms of validity, resource absorption, and future advantages of a potential application in an Italian local health authority. Methods. Diabetic patients living in the town of Ponzano, Veneto Region (Northern Italy), were invited to be enrolled in the screening programme. The "no prevention strategy", with the inclusion of the estimation of blindness-related costs, was compared with screening costs in order to evaluate a future extensive and feasible implementation of the procedure, through a budget impact approach. Results. Of the 498 eligible diabetic patients, 80% were enrolled in the screening programme; 115 patients (34%) were referred to an ophthalmologist, and 9 cases required prompt treatment for either proliferative DR or macular edema. Based on the pilot data, it emerged that an extensive use of the investigated screening programme, within the Greater Treviso area, could prevent 6 cases of blindness every year, resulting in a saving of €271,543.32 (−13.71%). Conclusions. Fundus images obtained with a nonmydriatic fundus camera could be considered an effective, cost-sparing, and feasible screening tool for the early detection of DR, preventing blindness as a result of diabetes.

  20. Fully integrated digital GAMMA camera-computer system

    International Nuclear Information System (INIS)

    Berger, H.J.; Eisner, R.L.; Gober, A.; Plankey, M.; Fajman, W.

    1985-01-01

    Although most of the new non-nuclear imaging techniques are fully digital, there has been a reluctance in nuclear medicine to abandon traditional analog planar imaging in favor of digital acquisition and display. The authors evaluated a prototype digital camera system (GE STARCAM) in which all of the analog acquisition components are replaced by microprocessor controls and digital circuitry. To compare the relative effects of acquisition matrix size on image quality and to ascertain whether digital techniques could be used in place of analog imaging, Tc-99m bone scans were obtained on this digital system and on a comparable analog camera in 10 patients. The dedicated computer is used for camera setup including definition of the energy window, spatial energy correction, and spatial distortion correction. The display monitor, which is used for patient positioning and image analysis, is 512² non-interlaced, allowing high-resolution imaging. Data acquisition and processing can be performed simultaneously. Thus, the development of a fully integrated digital camera-computer system with optimized display should allow routine utilization of non-analog studies in nuclear medicine and the ultimate establishment of fully digital nuclear imaging laboratories.

  1. Spectral colors capture and reproduction based on digital camera

    Science.gov (United States)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of the spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This establishes the relationship among the spectral color wavelength, the RGB color space of the digital camera and the CIEXYZ color space. The study also provides a basis for further studies related to spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photograph of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained at each location, one calculated using the grating equation and one measured by the spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. With the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors in digital devices such as displays and transmission.
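
    The grating-equation step described in the abstract (assigning a wavelength to each column of the photographed spectrum) can be sketched as below; the geometry and numbers are assumed for illustration and are not taken from the paper.

      # Illustration only: first-order grating equation m * lambda = d * sin(theta).
      import numpy as np

      d = 1e6 / 600.0                       # hypothetical grating pitch in nm (600 lines/mm)
      m = 1                                 # diffraction order
      L = 200.0                             # hypothetical grating-to-sensor distance, mm
      x = np.linspace(50.0, 92.0, 1040)     # hypothetical lateral positions on the sensor, mm

      theta = np.arctan(x / L)              # diffraction angle for each sampled column
      wavelength = d * np.sin(theta) / m    # assigned wavelength in nm
      print(wavelength.min(), wavelength.max())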

  2. Smartphone Fundus Photography.

    Science.gov (United States)

    Nazari Khanamiri, Hossein; Nakatsuka, Austin; El-Annan, Jaafar

    2017-07-06

    Smartphone fundus photography is a simple technique to obtain ocular fundus pictures using a smartphone camera and a conventional handheld indirect ophthalmoscopy lens. This technique is indispensable when picture documentation of the optic nerve, retina, and retinal vessels is necessary but a fundus camera is not available. The main advantage of this technique is the widespread availability of smartphones, which allows documentation of macula and optic nerve changes in many settings where it was not previously possible. Following the well-defined steps detailed here, such as proper alignment of the phone camera, handheld lens, and the patient's pupil, is the key to obtaining a clear retinal picture with no interfering light reflections or aberrations. In this paper, the optical principles of indirect ophthalmoscopy and fundus photography will be reviewed first. Then, the step-by-step method to record a good-quality retinal image using a smartphone will be explained.

  3. A digital gigapixel large-format tile-scan camera.

    Science.gov (United States)

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
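
    The focal-stack processing mentioned above is not specified in detail here; a common way to build an extended-depth-of-field image from such a stack is to keep, for every pixel, the frame with the strongest local high-frequency response. The sketch below is a minimal version of that idea with placeholder data, not the camera's actual algorithm.

        import numpy as np
        from scipy import ndimage

        def merge_focal_stack(stack):
            # stack: (n, h, w) grayscale frames focused at different depths.
            # Local energy of the Laplacian serves as a per-pixel sharpness score.
            sharp = np.array([ndimage.uniform_filter(
                np.abs(ndimage.laplace(img.astype(float))), size=9) for img in stack])
            best = np.argmax(sharp, axis=0)           # index of the sharpest frame per pixel
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]            # extended-depth-of-field image

        stack = np.random.rand(5, 64, 64)             # placeholder focal stack
        print(merge_focal_stack(stack).shape)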

  4. Printed products for digital cameras and mobile devices

    Science.gov (United States)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2005-01-01

    Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.

  5. A direct-view customer-oriented digital holographic camera

    Science.gov (United States)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  6. Color reproduction software for a digital still camera

    Science.gov (United States)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing by adjusting the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for estimating the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3 by 3 or 3 by 4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images were displayed in a dialogue box implemented in our software, generated according to four illuminations for the camera and three color temperatures for the monitor. A user can easily choose the best reproduced image by comparing them.
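
    As a rough sketch of the two numerical steps described above, gamma estimation from gray samples and a regression-based 3 by 3 color transformation, the following code uses synthetic placeholder values; it is not the software described in the paper.

        import numpy as np

        # Estimate camera gamma from level-processed digital values d and measured
        # luminances L of gray samples, assuming L is proportional to d**gamma.
        d = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])        # normalised digital values (placeholder)
        L = np.array([0.012, 0.05, 0.19, 0.41, 0.70, 1.0])  # normalised luminances (placeholder)
        gamma = np.polyfit(np.log(d), np.log(L), 1)[0]

        # Fit a 3 by 3 colour transformation matrix M such that xyz is approximately rgb @ M.
        rgb = np.random.rand(24, 3)                   # gamma-corrected camera values (placeholder)
        xyz = np.random.rand(24, 3)                   # measured tristimulus values (placeholder)
        M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
        print(round(gamma, 2), M.shape)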

  7. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting or different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data and thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings, which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
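
    A minimal sketch of the idea, learning a mapping from observed gray value, sensor temperature and exposure time to a corrected intensity, is given below. The network size, the synthetic dark-current model and all numbers are assumptions for illustration, not the architecture from the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 2000
        gray = rng.uniform(0, 1, n)        # observed gray value (placeholder)
        temp = rng.uniform(20, 60, n)      # sensor temperature in deg C (placeholder)
        expo = rng.uniform(0.01, 1.0, n)   # exposure time in s (placeholder)
        # Synthetic "true" intensity with a temperature-dependent dark-current offset.
        intensity = (gray - 0.02 * expo * np.exp(0.05 * (temp - 20))) / 0.9

        X = np.column_stack([gray, temp, expo])
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        model.fit(X, intensity)
        print(model.predict(X[:3]), intensity[:3])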

  8. Programmable electronic system for analog and digital gamma cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Omeu, E. J.

    2013-01-01

    At present the use of analog and digital gamma cameras is continuously increasing in developing countries. Many of them still largely rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. For this reason, different medical equipment manufacturing companies worldwide are engaged in partial or total gamma camera modernization. Nevertheless, in several cases acquisition prices are not affordable for developing countries. This work describes the basic features of a programmable electronic system that improves the acquisition functions and processing of analog and digital gamma cameras. The system is based on an electronic board for the acquisition and digitization of the nuclear pulses generated by the gamma camera detector. It comprises a hardware interface with a PC and the associated software for full signal processing. Signal shaping and image processing are included. The extensive use of reference tables in the processing and signal imaging software allowed the processing speed to be optimized. Design time and system cost were also reduced. (Author)

  9. Development of digital shade guides for color assessment using a digital camera with ring flashes.

    Science.gov (United States)

    Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan

    2011-02-01

    Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and camera white balance setups would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale d'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and those shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using digital shade guides, which was verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable and showed relatively low matching ability. In conclusion, the reliability of color matching with digital images is much influenced by the illuminants and the camera's white balance setup, while digital shade guides derived under LED illuminants with CWB demonstrate applicable potential in the field of color assessment.
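
    The comparison between camera-derived shade guides and spectrophotometer references rests on L*a*b* differences and correlation coefficients; a minimal sketch of those two calculations, with made-up L*a*b* values, is shown below.

        import numpy as np

        def delta_e_76(lab1, lab2):
            # CIE76 colour difference between paired sets of L*a*b* values, shape (N, 3).
            return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=1)

        # Placeholder values: digital shade guide vs. spectrophotometer reference.
        lab_camera = np.array([[62.1, 1.8, 15.2], [70.4, 0.9, 12.7]])
        lab_reference = np.array([[63.0, 2.0, 14.5], [69.8, 1.1, 13.3]])
        print(delta_e_76(lab_camera, lab_reference))
        # Correlation between camera-derived and reference L* values:
        print(np.corrcoef(lab_camera[:, 0], lab_reference[:, 0])[0, 1])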

  10. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
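
    The detection methods themselves are not reproduced here; one common way to find meteor-like trails in consecutive all-sky frames is frame differencing followed by a probabilistic Hough transform, sketched below with a synthetic frame pair as a stand-in for real station data.

        import cv2
        import numpy as np

        def detect_traces(prev_frame, frame):
            # Candidate trails as line segments between two consecutive grayscale frames.
            diff = cv2.absdiff(frame, prev_frame)                # suppress static stars
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))
            lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=40,
                                    minLineLength=30, maxLineGap=5)
            return [] if lines is None else [l[0] for l in lines]

        prev_frame = np.zeros((480, 480), np.uint8)              # placeholder frames
        frame = prev_frame.copy()
        cv2.line(frame, (100, 100), (200, 180), 255, 2)          # synthetic trail
        print(detect_traces(prev_frame, frame))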

  11. Teacher training for using digital video camera in primary education

    Directory of Open Access Journals (Sweden)

    Pablo García Sempere

    2011-12-01

    Full Text Available This paper shows the partial results of a research project carried out in primary schools, which evaluates the ability of teachers in the use of the digital video camera. The study took place in the province of Granada, Spain. Our purpose was to determine the level of knowledge, interest, difficulties and training needs so as to improve teaching practice. The work has been done from a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative (focus group) techniques have been used in this research. The information obtained shows that most of the teachers lack knowledge in the use of the video camera and digital editing. On the other hand, the majority agree to include initial and permanent training on this subject. Finally, the most important conclusions are presented.

  12. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol.

    Science.gov (United States)

    Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng

    2013-01-01

    To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. Firstly, a telemedicine-based eye care workflow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent objects (WADO) protocol, which contains three tiers. From any client system with a web browser installed, clinicians can log in to the eye-PACS to observe fundus images and reports. A structured report saved as a pdf/html multipurpose internet mail extensions (MIME) type with a reference link to the relevant fundus image using the WADO syntax can provide enough information for clinicians. Functions provided by the open-source Oviyam viewer can be used to query, zoom, move, measure, and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol can be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
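
    For orientation, retrieving a single fundus object through the WADO-URI syntax mentioned above amounts to an HTTP GET with the study, series and object UIDs as parameters. The endpoint and UIDs below are placeholders; a real eye-PACS deployment supplies its own values.

        import requests

        WADO_URL = "http://pacs.example.org/wado"        # placeholder endpoint
        params = {
            "requestType": "WADO",                       # required by the WADO-URI protocol
            "studyUID":  "1.2.840.113619.2.55.1",        # study instance UID (placeholder)
            "seriesUID": "1.2.840.113619.2.55.1.1",      # series instance UID (placeholder)
            "objectUID": "1.2.840.113619.2.55.1.1.1",    # SOP instance UID (placeholder)
            "contentType": "image/jpeg",                 # rendered image; application/dicom for raw
        }
        response = requests.get(WADO_URL, params=params, timeout=30)
        with open("fundus.jpg", "wb") as f:
            f.write(response.content)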

  13. Automated Detection and Differentiation of Drusen, Exudates, and Cotton-Wool Spots in Digital Color Fundus Photographs for Diabetic Retinopathy Diagnosis

    NARCIS (Netherlands)

    Niemeijer, M.; van Ginneken, B.; Russel, S.R.; Suttorp-Schulten, M.S.A.; Abràmoff, M.D.

    2007-01-01

    Purpose. To describe and evaluate a machine learning-based, automated system to detect exudates and cotton-wool spots in digital color fundus photographs and differentiate them from drusen, for early diagnosis of diabetic retinopathy. Methods. Three hundred retinal images from one eye of 300

  14. Establishing imaging sensor specifications for digital still cameras

    Science.gov (United States)

    Kriss, Michael A.

    2007-02-01

    Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of megapixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  15. Textureless Macula Swelling Detection with Multiple Retinal Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Tobin Jr, Kenneth William [ORNL; Grisan, Enrico [University of Padua, Padua, Italy; Favaro, Paolo [Heriot-Watt University, Edinburgh; Ruggeri, Alfredo [University of Padua, Padua, Italy; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2010-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists, a capability that is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height map of the macula. Results are presented on three sets of synthetic images and two sets of real world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.
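
    The third stage, dense pyramidal optical flow statistically combined over all views, can be approximated with off-the-shelf tools. The sketch below uses Farneback flow between a reference view and the other registered views and averages the flow magnitudes; it is a simplified stand-in for the paper's height-map construction, with random images as placeholders.

        import cv2
        import numpy as np

        def flow_magnitude(reference, view):
            # Dense Farneback optical flow between two registered grayscale fundus views.
            flow = cv2.calcOpticalFlowFarneback(reference, view, None,
                                                pyr_scale=0.5, levels=3, winsize=15,
                                                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            return np.linalg.norm(flow, axis=2)

        views = [np.random.randint(0, 255, (256, 256), np.uint8) for _ in range(4)]  # placeholders
        height_map = np.mean([flow_magnitude(views[0], v) for v in views[1:]], axis=0)
        print(height_map.shape)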

  16. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    Science.gov (United States)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We designed a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate the 3D laser scanner and the digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a distortion model of the digital camera to the traditional DLT algorithm; after repeated iteration, it can solve the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and the digital camera. The results prove that this method is reliable.
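
    The core of the basic DLT step is a linear system solved from 3D-2D point correspondences; the sketch below estimates the 3x4 projection matrix with an SVD and omits the camera distortion model and the iteration described in the paper. All points are synthetic placeholders.

        import numpy as np

        def dlt(points_3d, points_2d):
            # Basic DLT: estimate the 3x4 projection matrix from >= 6 correspondences.
            A = []
            for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
                A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
                A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
            _, _, vt = np.linalg.svd(np.asarray(A))
            return vt[-1].reshape(3, 4)               # right singular vector of the smallest singular value

        pts3d = np.random.rand(8, 3) * 10             # placeholder calibration target points
        P_true = np.random.rand(3, 4)
        proj = np.c_[pts3d, np.ones(8)] @ P_true.T
        pts2d = proj[:, :2] / proj[:, 2:]
        P = dlt(pts3d, pts2d)
        print(P / P[-1, -1])                          # agrees with P_true / P_true[-1, -1]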

  17. Digital subtraction angiography with an Isocon camera system: clinical applications

    International Nuclear Information System (INIS)

    Barbaric, Z.L.; Gomes, A.S.; Deckard, M.E.; Nelson, R.S.; Moler, C.L.

    1984-01-01

    A new imaging system for digital subtraction angiography (DSA) was evaluated in 30 clinical studies. The image receptor is a 25 × 25 cm, 12 par gadolinium oxysulfate rare-earth screen whose light output is focused to a low-light-level Isocon camera. The video signal is digitized and processed by an image-array processor containing 31 512 × 512 memories 8 bits deep. In most patients, intraarterial DSA studies were done in conjunction with conventional arteriography. In these arterial studies, images adequate to make a specific diagnosis were obtained using half the radiation dose and half the amount of contrast material needed for conventional angiography. In eight intravenous studies performed either to identify renal artery stenosis or for evaluation of congenital heart anomalies, the images were diagnostic but objectionably noisy.

  18. Characterization of a digital camera as an absolute tristimulus colorimeter

    Science.gov (United States)

    Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual

    2003-01-01

    An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSCs) which allows them to be used as tele-colorimeters with CIE-XYZ color output, in cd/m². The spectral characterization consists of the calculation of the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m²) under variable and unknown spectroradiometric conditions. Thus, at the first stage, a gray balance is applied to the RGB digital data to convert them into relative colorimetric RGB values. At the second stage, an algorithm of luminance adaptation versus lens aperture is inserted into the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color analysis accuracy indexes, both in a raw state and with the corrections from a linear model of color correction, have been evaluated using the Pointer'86 color reproduction index with the unrelated Hunt'91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.

  19. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    Science.gov (United States)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the digital camera Canon EOS 350D, often used for the photogrammetric 3D digitalisation and measurement of industrial and construction site objects. During the calibration, data on the optical and electronic parameters influencing the distortion of images, such as correction of the principal point, focal length of the objective, and radial symmetrical and non-symmetrical distortions, were obtained. The calibration was performed by means of the Tcc software implementing the Chebyshev polynomial and using a special test-field with marks whose coordinates are precisely known. The main task of the research was to determine how the camera calibration parameters influence the processing of images, i.e., the creation of the geometric model, the results of triangulation calculations and stereo-digitalisation. Two photogrammetric projects were created for this task: in the first, uncorrected images were used; in the second, images corrected for the optical errors of the camera obtained during the calibration. The results of the image processing analysis are shown in figures and tables. Conclusions are given.

  20. Telemedicine for a General Screening of Retinal Disease Using Nonmydriatic Fundus Cameras in Optometry Centers: Three-Year Results.

    Science.gov (United States)

    Zapata, Miguel A; Arcos, Gabriel; Fonollosa, Alex; Abraldes, Maximino; Oleñik, Andrea; Gutierrez, Estanislao; Garcia-Arumi, Jose

    2017-01-01

    We describe the first 3 years of highly specialized retinal screening through a web platform using a retinologists' network for image reading. All patients who came to centers in the network and consented to fundus photography were included. Images were evaluated by ophthalmologists. We describe the number of patients, age, visual acuity, retinal abnormalities, medical recommendations, and factors associated with abnormal retinographies. Fifty thousand three hundred eighty-four patients were included; mean age 52.3 years (range 3-99). Mean visual acuity was 20/25. Of the total cohort, 75% had normal retinographies, 22% had abnormalities, 1% referred acute floaters, 1% referred acute symptoms with normal retinography, and 1% could not be assessed. Ophthalmological referral was recommended in 12,634 patients: 9% urgent visit, 11% preferential (2-3 weeks), and 80% an ordinary visit. Age-related maculopathy signs were the most common abnormalities (2,456 patients, 4.8%). Epiretinal membrane was the second (764 cases, 1.5%). Diabetic retinopathy was suspected in 543 patients (1%), and nevi in 358 patients (0.7%). Patients older than 50 years had significantly more retinal abnormalities (31.5%) than younger ones (11.1%) (p < 0.0001; odds ratio [OR] 2.47; confidence interval [CI] 2.37-2.57). Patients with at least one eye with a myopic defect greater than -5 spherical equivalent had a higher risk of presenting abnormalities (p < 0.001; OR 1.04; CI 1.03-1.05). A high rate of asymptomatic retinal abnormalities was detected in this general screening, justifying this practice. Many patients who visit optometrists in Spain are unaware that they would benefit from ophthalmological monitoring. The ophthalmic community should lead initiatives of the type presented to preserve and guarantee quality standards.
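
    The odds ratios and confidence intervals quoted above follow the usual 2x2-table calculation; a small sketch with placeholder counts (not the study's data) is given below.

        import numpy as np

        # 2x2 table (placeholder counts): rows = age > 50 vs. <= 50,
        # columns = abnormal vs. normal retinography.
        a, b = 9000, 19500   # older patients: abnormal, normal
        c, d = 2400, 19200   # younger patients: abnormal, normal

        odds_ratio = (a * d) / (b * c)
        se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
        print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))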

  1. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis.

    Science.gov (United States)

    Niemeijer, Meindert; van Ginneken, Bram; Russell, Stephen R; Suttorp-Schulten, Maria S A; Abràmoff, Michael D

    2007-05-01

    To describe and evaluate a machine learning-based, automated system to detect exudates and cotton-wool spots in digital color fundus photographs and differentiate them from drusen, for early diagnosis of diabetic retinopathy. Three hundred retinal images from one eye of 300 patients with diabetes were selected from a diabetic retinopathy telediagnosis database (nonmydriatic camera, two-field photography): 100 with previously diagnosed bright lesions and 200 without. A machine learning computer program was developed that can identify and differentiate among drusen, (hard) exudates, and cotton-wool spots. A human expert standard for the 300 images was obtained by consensus annotation by two retinal specialists. Sensitivities and specificities of the annotations on the 300 images by the automated system and a third retinal specialist were determined. The system achieved an area under the receiver operating characteristic (ROC) curve of 0.95 and sensitivity/specificity pairs of 0.95/0.88 for the detection of bright lesions of any type, and 0.95/0.86, 0.70/0.93, and 0.77/0.88 for the detection of exudates, cotton-wool spots, and drusen, respectively. The third retinal specialist achieved pairs of 0.95/0.74 for bright lesions and 0.90/0.98, 0.87/0.98, and 0.92/0.79 per lesion type. A machine learning-based, automated system capable of detecting exudates and cotton-wool spots and differentiating them from drusen in color images obtained from community-based diabetic patients has been developed and approaches the performance level of retinal experts. If the machine learning can be improved with additional training data sets, it may be useful for detecting clinically important bright lesions, enhancing early diagnosis, and reducing visual loss in patients with diabetes.
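
    The reported area under the ROC curve and the sensitivity/specificity pairs are standard derived quantities; the sketch below computes them from placeholder detector scores, not from the study's lesion data.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        # Placeholder per-image labels (1 = bright lesion present) and detector scores.
        y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
        y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.75, 0.6, 0.15])

        print("AUC:", roc_auc_score(y_true, y_score))
        fpr, tpr, thresholds = roc_curve(y_true, y_score)
        for f, t, th in zip(fpr, tpr, thresholds):
            # Each threshold yields one sensitivity/specificity operating point.
            print(f"threshold={th:.2f}  sensitivity={t:.2f}  specificity={1 - f:.2f}")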

  2. Comparison of film and digital fundus photographs in eyes of individuals with diabetes mellitus

    DEFF Research Database (Denmark)

    Gangaputra, Sapna; Almukhtar, Talat; Glassman, Adam R

    2011-01-01

    To compare grading of diabetic retinopathy (DR) and diabetic macular edema (DME) from stereoscopic film versus stereoscopic digital photographs obtained from a subset of Diabetic Retinopathy Clinical Research Network (DRCR.net) participants.

  3. A digital variable persistence oscilloscope for gamma cameras

    International Nuclear Information System (INIS)

    Fenwick, J.D.; Thompson, A.

    1981-01-01

    The system briefly described is intended as a direct replacement for the analogue persistence oscilloscope, particularly in systems without a computer processor. It uses digital and video techniques to produce an image quality suitable for use in positioning patients under the camera at a low cost (total cost of materials used, £500). The performance is superior to the analogue oscilloscope in that the image is displayed with 16 shades of grey. It incorporates an automatic brightness control which ensures that the image does not saturate at high count density, and the saturation can be changed manually, allowing areas of low counts to be examined in the presence of high counts. The inability of a digital system to store each single event as a dot which fades exponentially with time has been solved by adding each event into the appropriate cell of a digital display matrix and then periodically dividing the contents of each image cell by two. The cells are addressed and divided in a pseudo-random pattern so that, to the observer, the whole image appears to fade smoothly and evenly. (U.K.)
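
    The fading scheme described above, incrementing a display cell per event and periodically halving all cells in a pseudo-random order, can be summarised in a few lines; the matrix size and counts below are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        image = np.zeros((64, 64), dtype=np.uint16)    # digital display matrix (placeholder size)

        def add_event(x, y):
            image[y, x] += 1                           # each detected event increments a cell

        def fade_pass():
            # Visit every cell once in a pseudo-random order and halve its contents,
            # so the image appears to fade smoothly rather than row by row.
            flat = image.reshape(-1)
            for idx in rng.permutation(flat.size):
                flat[idx] //= 2

        for _ in range(500):
            add_event(rng.integers(0, 64), rng.integers(0, 64))
        fade_pass()
        print(image.max())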

  4. Kodak Digital Camera and The Lost Business Opportunity

    Institute of Scientific and Technical Information of China (English)

    Hamsa; Thota

    2012-01-01

    A systematic study of Kodak's annual operations and business strategies during 2000-2010 revealed that Kodak management faltered in transitioning the Kodak Company from an analog business model to a digital business model. In 2000 Kodak delivered strong performance and it appeared to be smart to be in the picture business. In 2002 Kodak was the best-performing stock among companies that made up the Dow Jones Industrial Average. In 2005 Kodak's future looked bright. A confident Chairman and CEO Antonio M. Perez pronounced that by 2008 he expected all of Kodak's businesses to be leaders in their industry segments. In 2008 Kodak remained one of the most recognized and respected brands in the world, but it played in hyper-competitive markets in which price and technological advances drove the market. So Kodak was unable to reap premium prices from its famous brand and it became a nonviable business due to sustained losses from continuing operations. During 2008-2012 Kodak fell from being a market leader to becoming a bankrupt company. Using the analogy of "behind the power curve", this article shines light on Kodak's crash to the ground, i.e. its bankruptcy filing in 2012, and asserts that Kodak management triggered the process of falling behind the power curve in 2000 when it embraced the infoimaging strategy to extend the benefits of film. Kodak's 2003 digital business model and the other strategies that followed it did not allow Kodak to become a strong competitor in the digital world. The Kodak digital camera business became a lost business opportunity.

  5. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  6. PROPERTY OF THE LARGE FORMAT DIGITAL AERIAL CAMERA DMC II

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2012-07-01

    Full Text Available With the DMC II 140, 230 and 250, Z/I Imaging introduced digital aerial cameras with a very large format CCD for the panchromatic channel. With 140 / 230 / 250 megapixels, the CCDs have a size not available in photogrammetry before. CCDs in general have a very high relative accuracy, but the overall geometry has to be checked, as well as the influence of CCDs that are not flat. A CCD with a size of 96 mm × 82 mm must have a flatness, or knowledge of the flatness, in the range of 1 μm if the camera accuracy in the range of 1.3 μm shall not be influenced. The DMC II cameras have been evaluated with three different flying heights leading to 5 cm, 9 cm and 15 cm or 20 cm GSD, crossing flight lines and 60% side lap. The optimal test conditions guaranteed the precise determination of the object coordinates as well as the systematic image errors. All three camera types show only very small systematic image errors, ranging in the root mean square between 0.12 μm and 0.3 μm, with extreme values not exceeding 1.6 μm. The remaining systematic image errors, determined by analysis of the image residuals and not covered by the additional parameters, are negligible. A standard deviation of the object point heights below the GSD, determined at independent check points, even in blocks with just 20% side lap and 60% end lap, is standard. Corresponding to the excellent image geometry, the object point coordinates are only slightly influenced by the self calibration. For all DMC II types the handling of image models for data acquisition does not need to be supported by an improvement of the image coordinates with the determined systematic image errors. Such an improvement is up to now not standard for photogrammetric software packages. The advantage of a single monolithic CCD is obvious. An edge analysis of pan-sharpened DMC II 250 images resulted in factors for the effective resolution below 1.0. The result below 1.0 is only possible by contrast enhancement, but this requires with low image noise

  7. Using Digital Cameras to Detect Warning Signs of Volcanic Eruptions

    Science.gov (United States)

    Girona, T.; Huber, C.; Trinh, K. T.; Protti, M.; Pacheco, J. F.

    2017-12-01

    Monitoring volcanic outgassing is fundamental to improving the forecasting of volcanic eruptions. Recent efforts have led to the advent of new methods to measure the concentration and flux of volcanic gases with unprecedented temporal resolution, thus allowing us to obtain reliable high-frequency (up to 1 Hz) time series of outgassing activity. These high-frequency methods have shown that volcanic outgassing can sometimes be periodic (with periodicities ranging from 10¹ s to 10³ s), although it remains unknown whether the spectral features of outgassing reflect the processes that ultimately trigger volcanic unrest and eruptions. In this study, we explore the evolution of the spectral content of the outgassing activity of Turrialba volcano (Costa Rica) using digital images (with digital brightness as a proxy for the emissions of water vapor [Girona et al., 2015]). Images were taken at 1 km distance with a 1 Hz sampling rate, and the time period analyzed (from April 2016 to April 2017) is characterized by episodes of quiescent outgassing, ash explosions, and sporadic eruptions of ballistics. Our preliminary results show that: 1) quiescent states of Turrialba volcano are characterized by outgassing frequency spectra with a fractal distribution; 2) superimposed onto the fractal frequency spectra, well-defined pulses with a period of around 100 s emerge hours to days before some of the eruptions of ballistics. An important conclusion of this study is that digital cameras can potentially be used in real-time volcano monitoring to detect warning signs of eruptions, as well as to better understand subsurface processes and track the changing conditions below volcanic craters. Our ongoing study also explores the correlation between the evolution of the spectral content of outgassing, infrasound data, and shallow seismicity. Girona, T., F. Costa, B. Taisne, B. Aggangan, and S. Ildefonso (2015), Fractal degassing from Erebus and Mayon volcanoes revealed by a new method to monitor H2O

  8. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    Science.gov (United States)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, pixel counts and functions of consumer grade digital cameras have been increasing remarkably thanks to modern semiconductor and digital technology, and there are many low-priced consumer grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer grade cameras is widely anticipated in various application fields. There is a large body of literature on the calibration of consumer grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star tracker issue, and many target location algorithms have been developed. It is widely accepted that the least squares model with ellipse fitting is the most accurate algorithm. However, there are still problems for efficient digital close range photogrammetry. These problems are the reconfirmation of the target location algorithms with subpixel accuracy for consumer grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close range photogrammetry using consumer grade cameras. With this motive, empirical testing of several algorithms for target location with subpixel accuracy and an indicator for estimating the accuracy are investigated in this paper using real data acquired indoors with 7 consumer grade digital cameras ranging from 7.2 megapixels to 14.7 megapixels.
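
    As a reminder of how the least-squares ellipse-fitting approach mentioned above locates a circular target with subpixel accuracy, the sketch below fits an algebraic conic to synthetic edge points and recovers the centre; it is illustrative only and not one of the algorithms tested in the paper.

        import numpy as np

        def fit_ellipse_center(xs, ys):
            # Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
            D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
            _, _, vt = np.linalg.svd(D)
            a, b, c, d, e, _ = vt[-1]
            # Centre of the conic: where the gradient of the quadratic form vanishes.
            M = np.array([[2 * a, b], [b, 2 * c]])
            cx, cy = np.linalg.solve(M, [-d, -e])
            return cx, cy

        theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
        xs = 10.3 + 4.0 * np.cos(theta) + np.random.normal(0, 0.02, 40)  # placeholder edge points
        ys = 20.7 + 2.5 * np.sin(theta) + np.random.normal(0, 0.02, 40)
        print(fit_ellipse_center(xs, ys))   # close to (10.3, 20.7)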

  9. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    Science.gov (United States)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add new features and improve existing ones in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving the parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance, both in good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras
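
    The search side of band-pass passive AF can be pictured as maximising a sharpness measure over lens positions, typically with a coarse scan followed by a fine scan. The sketch below is a generic illustration with a toy blur model, not the Filter-Switching AF algorithm itself.

        import numpy as np

        def sharpness(image):
            # Simple focus measure: energy of horizontal pixel differences.
            return float(np.mean(np.diff(image.astype(float), axis=1) ** 2))

        def autofocus(capture, positions, coarse_step=8, fine_step=1):
            # Coarse scan over lens positions, then a fine scan around the best one.
            coarse = positions[::coarse_step]
            best = max(coarse, key=lambda p: sharpness(capture(p)))
            lo = max(positions[0], best - coarse_step)
            hi = min(positions[-1], best + coarse_step)
            return max(range(lo, hi + 1, fine_step), key=lambda p: sharpness(capture(p)))

        # Toy "camera": the scene is least blurred near lens position 37.
        rng = np.random.default_rng(1)
        scene = rng.random((64, 64))
        def capture(p):
            k = np.ones(abs(p - 37) + 1) / (abs(p - 37) + 1)
            return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, scene)

        print(autofocus(capture, list(range(0, 101))))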

  10. Fundus Photography in the 21st Century—A Review of Recent Technological Advances and Their Implications for Worldwide Healthcare

    Science.gov (United States)

    Panwar, Nishtha; Huang, Philemon; Lee, Jiaying; Keane, Pearse A.; Chuan, Tjin Swee; Richhariya, Ashutosh; Teoh, Stephen; Lim, Tock Han

    2016-01-01

    Abstract Background: The introduction of fundus photography has impacted retinal imaging and retinal screening programs significantly. Literature Review: Fundus cameras play a vital role in addressing the causes of preventable blindness. More attention is being turned to developing countries, where infrastructure and access to healthcare are limited. One of the major limitations for tele-ophthalmology is restricted access to the office-based fundus camera. Results: Recent advances in access to telecommunications coupled with the introduction of portable cameras and smartphone-based fundus imaging systems have resulted in an exponential surge in available technologies for portable fundus photography. Retinal cameras in the near future would have to cater to these needs by featuring a low-cost, portable design with automated controls and digitalized images with Web-based transfer. Conclusions: In this review, we aim to highlight the advances of fundus photography for retinal screening as well as discuss the advantages, disadvantages, and implications of the various technologies that are currently available. PMID:26308281

  11. High-Resolution Imaging of Parafoveal Cones in Different Stages of Diabetic Retinopathy Using Adaptive Optics Fundus Camera.

    Directory of Open Access Journals (Sweden)

    Mohamed Kamel Soliman

    Full Text Available To assess cone density as a marker of early signs of retinopathy in patients with type II diabetes mellitus. An adaptive optics (AO) retinal camera (rtx1™; Imagine Eyes, Orsay, France) was used to acquire images of parafoveal cones from patients with type II diabetes mellitus with or without retinopathy and from healthy controls with no known systemic or ocular disease. Cone mosaic was captured at 0° and 2° eccentricities along the horizontal and vertical meridians. The density of the parafoveal cones was calculated within 100×100-μm squares located at 500 μm from the foveal center along the orthogonal meridians. Manual corrections of the automated counting were then performed by 2 masked graders. Cone density measurements were evaluated with ANOVA that consisted of one between-subjects factor, stage of retinopathy, and the within-subject factors. The ANOVA model included a complex covariance structure to account for correlations between the levels of the within-subject factors. Ten healthy participants (20 eyes) and 25 patients (29 eyes) with type II diabetes mellitus were recruited in the study. The mean (± standard deviation [SD]) age of the healthy participants (Control group), patients with diabetes without retinopathy (No DR group), and patients with diabetic retinopathy (DR group) was 55 ± 8, 53 ± 8, and 52 ± 9 years, respectively. The cone density was significantly lower in the moderate nonproliferative diabetic retinopathy (NPDR) and severe NPDR/proliferative DR groups compared to the Control, No DR, and mild NPDR groups (P < 0.05). No correlation was found between cone density and the level of hemoglobin A1c (HbA1c) or the duration of diabetes. The extent of photoreceptor loss on AO imaging may correlate positively with severity of DR in patients with type II diabetes mellitus. Photoreceptor loss may be more pronounced among patients with advanced stages of DR due to higher risk of macular edema and its sequelae.

  12. Automatic astronomical coordinate determination using digital zenith cameras

    Directory of Open Access Journals (Sweden)

    S Farzaneh

    2009-12-01

    Full Text Available Celestial positioning has been used for navigation purposes for many years. Stars as extra-terrestrial benchmarks provide a unique opportunity in absolute point positioning. However, astronomical field data acquisition and processing of the collected data are very time-consuming. The advent of the Global Positioning System (GPS) nearly made celestial positioning obsolete. The new satellite-based positioning system has been very popular since it is very efficient and convenient for many daily life applications. Nevertheless, the celestial positioning method has never been replaced by satellite-based positioning in the absolute point positioning sense. The invention of electro-optical devices at the beginning of the 21st century was a real rebirth in geodetic astronomy. Today, digital cameras with relatively high geometric and radiometric accuracy have opened a new insight into satellite attitude determination and the study of the Earth's surface geometry and the physics of its interior, i.e., the computation of astronomical coordinates and the vertical deflection components. This method, the so-called astrogeodetic vision-based method, helps us to determine astronomical coordinates with an accuracy better than 0.1 arc second. The theoretical background, an innovative transformation approach and preliminary numerical results are addressed in this paper.

  13. The role of camera-bundled image management software in the consumer digital imaging value chain

    Science.gov (United States)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  14. Quality Enhancement and Nerve Fibre Layer Artefacts Removal in Retina Fundus Images by Off Axis Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras are employed worldwide by retina specialists to diagnose diabetic retinopathy and other degenerative diseases. Even with this relative ease of use, the images produced by these systems sometimes suffer from reflectance artefacts, mainly due to the nerve fibre layer (NFL) or other camera lens related reflections. We propose a technique that employs multiple fundus images acquired from the same patient to obtain a single higher quality image without these reflectance artefacts. The removal of bright artefacts, and particularly of NFL reflectance, can have great benefits for the reduction of false positives in the detection of retinal lesions such as exudates, drusen and cotton wool spots by automatic systems or manual inspection. If enough redundant information is provided by the multiple images, this technique also compensates for suboptimal illumination. The fundus images are acquired in a straightforward but unorthodox manner, i.e. the stare point of the patient is changed between each shot but the camera is kept fixed. Between each shot, the apparent shape and position of all the retinal structures that do not exhibit isotropic reflectance (e.g. bright artefacts) change. This physical effect is exploited by our algorithm in order to extract the pixels belonging to the inner layers of the retina, hence obtaining a single artefact-free image.
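
    A much simplified stand-in for the pixel-selection idea described above is to combine the registered views with a low per-pixel percentile, which keeps the underlying retina and discards the bright, view-dependent reflections; the sketch below uses random arrays in place of registered fundus images.

        import numpy as np

        def remove_bright_artifacts(registered_views, percentile=20):
            # registered_views: (n, h, w) views of the same eye registered to a common frame,
            # each with the NFL reflection in a different place. A low per-pixel percentile
            # suppresses the bright, anisotropic reflections.
            stack = np.asarray(registered_views, dtype=float)
            return np.percentile(stack, percentile, axis=0)

        views = np.random.rand(4, 128, 128)   # placeholder registered acquisitions
        print(remove_bright_artifacts(views).shape)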

  15. Selecting the right digital camera for telemedicine-choice for 2009.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret

    2010-03-01

    Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.

  16. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    Science.gov (United States)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO standards for resolution, well established for digital consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion are recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  17. 77 FR 43858 - Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and...

    Science.gov (United States)

    2012-07-26

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-703] Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and Components Thereof; Determination To Review... importation, and the sale within the United States after importation of certain mobile telephones and wireless...

  18. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    Science.gov (United States)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their cost of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with a film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high sensitivity professional film cameras, which are very interesting for meteor observation, have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with a film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300, an exploding 2014 Aurigid, shot with a Sony alpha7S, and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  19. Issues in implementing services for a wireless web-enabled digital camera

    Science.gov (United States)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

    The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-Appliance based Communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services as well as the need to download software upgrades to the camera is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to the enterprise systems.

  20. Retinal fundus imaging with a plenoptic sensor

    Science.gov (United States)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera where an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve the depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
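
    Digital refocusing of a light field is commonly done by shifting and summing the sub-aperture views; the sketch below shows that generic operation with placeholder views and offsets and is not the processing pipeline of the prototype described here.

        import numpy as np

        def refocus(subapertures, offsets, alpha):
            # Shift-and-sum refocusing: subapertures is (n, h, w) sub-aperture views,
            # offsets holds the (u, v) aperture coordinate of each view, and alpha
            # selects the virtual focal plane (0 keeps the original plane).
            n, h, w = subapertures.shape
            acc = np.zeros((h, w))
            for img, (du, dv) in zip(subapertures, offsets):
                shift_y, shift_x = int(round(alpha * dv)), int(round(alpha * du))
                acc += np.roll(np.roll(img, shift_y, axis=0), shift_x, axis=1)
            return acc / n

        views = np.random.rand(9, 64, 64)                         # placeholder sub-aperture images
        offsets = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]
        print(refocus(views, offsets, alpha=2.0).shape)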

  1. Calibration of high resolution digital camera based on different photogrammetric methods

    International Nuclear Information System (INIS)

    Hamid, N F A; Ahmad, A

    2014-01-01

    This paper presents a method of calibrating a high-resolution digital camera based on different configurations, comprising stereo and convergent set-ups. Both configurations are used in laboratory and field calibration. Laboratory calibration is based on a 3D test field where a calibration plate of dimension 0.4 m × 0.4 m with a grid of targets at different heights is used. Field calibration uses the same concept of a 3D test field, comprising 81 target points located on flat ground with a dimension of 9 m × 9 m. In this study, a non-metric high resolution digital camera, a Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate the behavior of the internal digital camera parameters, i.e. whether parameters such as focal length, principal point and others remain the same or change. In the laboratory, a scale bar is placed in the test field for scaling the images, and approximate coordinates were used in the calibration process. A similar method is utilized in the field calibration. For both test fields, the digital images were acquired within a short period using the stereo and convergent configurations. For field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations. The accuracy of the results is evaluated based on standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements or results. The best method of calibration depends on the type of application. Finally, for most applications the digital camera is calibrated on site; hence, field calibration is the best method of calibration and could be employed for obtaining accurate

  2. Hyperspectral fundus imager

    Science.gov (United States)

    Truitt, Paul W.; Soliz, Peter; Meigs, Andrew D.; Otten, Leonard John, III

    2000-11-01

    A Fourier transform hyperspectral imager was integrated onto a standard clinical fundus camera, a Zeiss FF3, for the purpose of spectrally characterizing normal anatomical and pathological features in the human ocular fundus. To develop this instrument, an existing FDA-approved retinal camera was selected to avoid the difficulties of obtaining new FDA approval. Because of this, several unusual design constraints were imposed on the optical configuration. Techniques to calibrate the sensor and to define where the hyperspectral pushbroom stripe was located on the retina were developed, including the manufacture of an artificial eye with calibration features suitable for a spectral imager. In this implementation the Fourier transform hyperspectral imager can collect over one hundred spectrally resolved bands of 86 cm-1 width with 12 micrometer/pixel spatial resolution within the 450 nm to 1050 nm band. This equates to 2 nm to 8 nm spectral resolution depending on the wavelength. For retinal observations the band of interest tends to lie between 475 nm and 790 nm. The instrument has been in use over the last year, successfully collecting hyperspectral images of the optic disc, retinal vessels, choroidal vessels, retinal background, the macula, diabetic macular edema, and lesions of age-related macular degeneration.
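
    The quoted 2 nm to 8 nm figure follows from converting the constant 86 cm-1 wavenumber resolution into wavelength resolution via delta_lambda ≈ lambda^2 * delta_nu; a short check of that arithmetic:

```python
# Convert the instrument's constant wavenumber resolution (86 cm^-1) into
# wavelength resolution at a few wavelengths: delta_lambda ≈ lambda^2 * delta_nu.
DELTA_NU = 86.0          # cm^-1
for wavelength_nm in (450.0, 790.0, 1050.0):
    lam_cm = wavelength_nm * 1e-7                 # nm -> cm
    dlam_nm = (lam_cm ** 2) * DELTA_NU * 1e7      # cm -> nm
    print(f"{wavelength_nm:6.0f} nm -> ~{dlam_nm:.1f} nm resolution")
# ~1.7 nm at 450 nm, ~5.4 nm at 790 nm, ~9.5 nm at 1050 nm, roughly consistent
# with the 2-8 nm range quoted over the retinal band of interest.
```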

  3. Comparison of digital color fundus imaging and fluorescein angiographic findings for the early detection of diabetic retinopathy in young type 1 diabetic patients.

    Science.gov (United States)

    Kapsala, Z; Anastasakis, A; Mamoulakis, D; Maniadaki, I; Tsilimbaris, M

    2018-01-01

    To compare the findings from digital 7-field color fundus (CF) photography and fundus fluorescein angiography (FFA) in young patients with diabetes mellitus (DM) type 1 without known diabetic retinopathy. In this prospective, observational cohort study, 54 type 1 diabetic patients were recruited. Participants had been diagnosed with diabetes mellitus (DM) for at least 6 years, had Best Corrected Visual Acuity of 20/25 or better and did not have any known retinal pathology. One hundred and seven eyes were analyzed. All patients underwent a complete ophthalmic examination in the Retina Service of a University Eye Clinic including digital CF imaging and FFA. The mean age of the patients was 18.6 years. Mean duration of DM was 11.3 years, and mean haemoglobin A1c (HbA1c) level was 8.6%. Of the 107 eyes, 8 eyes (7.5%) showed microvascular abnormalities on CF images, while FFA images revealed changes in 26 eyes (24.3%). Hence, 18 of the 26 eyes showing abnormalities on FFA did not show any abnormalities on CF images. Mean DM duration in the patient group with detectable microvascular changes was found to be significantly higher compared to patients without changes, while no difference in HbA1c levels, serum lipid levels or blood pressure was observed. Comparison of digital CF and FFA findings for the detection of diabetic microvascular changes in type 1 diabetic patients showed that FFA reveals more information about retinal vascular pathology for early detection of diabetic retinopathy. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  4. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    Directory of Open Access Journals (Sweden)

    Jan Mertens

    2017-10-01

    Full Text Available Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, creating a barrier for institutes with limited funding and thereby hampering progress. An assessment is made of whether a low-cost compact camera with image-stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus-stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera's image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, within its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens.
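
    For readers unfamiliar with focus stacking, the sketch below shows one generic way to merge an aligned focus bracket (a Laplacian sharpness map per slice, keeping the sharpest pixel); it is not the TG-4's internal algorithm, and the function and parameter names are illustrative.

```python
# A generic focus-stacking merge (not the camera's internal algorithm):
# for each pixel, keep the value from the slice with the strongest local
# high-frequency response, measured with a Laplacian-of-Gaussian filter.
import numpy as np
from scipy.ndimage import gaussian_laplace

def focus_stack(slices: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """slices: (N, H, W) grayscale focus bracket, already aligned."""
    sharpness = np.stack(
        [np.abs(gaussian_laplace(s.astype(np.float64), sigma)) for s in slices]
    )
    best = np.argmax(sharpness, axis=0)           # index of sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return slices[best, rows, cols]
```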

  5. Camac interface for digitally recording infrared camera images

    International Nuclear Information System (INIS)

    Dyer, G.R.

    1986-01-01

    An instrument has been built to store the digital signals from a modified imaging infrared scanner directly in a digital memory. This procedure avoids the signal-to-noise degradation and dynamic range limitations associated with successive analog-to-digital and digital-to-analog conversions and the analog recording method normally used to store data from the scanner. This technique also allows digital data processing methods to be applied directly to recorded data and permits processing and image reconstruction to be done using either a mainframe or a microcomputer. If a suitable computer and CAMAC-based data collection system are already available, digital storage of up to 12 scanner images can be implemented for less than $1750 in materials cost. Each image is stored as a frame of 60 x 80 eight-bit pixels, with an acquisition rate of one frame every 16.7 ms. The number of frames stored is limited only by the available memory. Initially, data processing for this equipment was done on a VAX 11-780, but images may also be displayed on the screen of a microcomputer. Software for setting the displayed gray scale, generating contour plots and false-color displays, and subtracting one image from another (e.g., background suppression) has been developed for IBM-compatible personal computers

  6. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    Science.gov (United States)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  7. Digital quality control of the camera computer interface

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1983-01-01

    A brief description is given of how the gamma camera-computer interface works and what kind of errors can occur. Quality control tests of the interface are then described which include 1) tests of static performance e.g. uniformity, linearity, 2) tests of dynamic performance e.g. basic timing, interface count-rate, system count-rate, 3) tests of special functions e.g. gated acquisition, 4) tests of the gamma camera head, and 5) tests of the computer software. The tests described are mainly acceptance and routine tests. Many of the tests discussed are those recommended by an IAEA Advisory Group for inclusion in the IAEA control schedules for nuclear medicine instrumentation. (U.K.)

  8. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    Science.gov (United States)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of digital cameras relative to that of conventional (photographic film) cameras. For example, the diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of the light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy of various methods, such as the bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
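
    The spread-function route mentioned at the end can be summarized in a few lines: differentiate a measured edge-spread function to obtain the line-spread function and take the magnitude of its Fourier transform. The sketch below assumes a 1-D ESF profile is already available; names and the windowing choice are illustrative, not details of the authors' system.

```python
# A minimal sketch of the spread-function route to the MTF.
import numpy as np

def mtf_from_esf(esf: np.ndarray, sample_pitch: float):
    """Return (spatial frequencies, normalized MTF) from a 1-D ESF profile."""
    lsf = np.gradient(esf)                       # LSF = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)             # window to suppress edge leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                # normalize to DC = 1
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)   # cycles per unit length
    return freqs, mtf
```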

  9. Training in remote monitoring technology. Digital camera module-14(DCM-14)

    International Nuclear Information System (INIS)

    Caskey, Susan

    2006-01-01

    The DCM-14 (Digital Camera Module) is the backbone of current IAEA remote monitoring surveillance systems. The control module is programmable with features for encryption, authentication, image compression and scene change detection. It can take periodic or triggered images under a variety of time sequences. This training session covered the DCM-14 features and related programming in DCMSET. It also described the processes for receiving, archiving and backing up the camera images using DCMPOLL and GEMINI software. Setting up a DCM-14 camera controller in the configuration of the remote monitoring system at Joyo formed an exercise. (author)

  10. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    Science.gov (United States)

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
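
    The described mapping, in which digital RGB counts are transformed linearly into polynomial coefficients of the spectral radiance, can be sketched as a least-squares fit against spectrometer training data. The code below is an illustration of that idea, not the authors' implementation; all variable names and the polynomial degree are assumptions.

```python
# Illustrative linear mapping from RGB counts to polynomial coefficients of the
# spectral radiance, trained on spectrometer measurements (all data hypothetical).
import numpy as np

def fit_rgb_to_coeffs(rgb_train, spectra_train, wavelengths, degree=4):
    """rgb_train: (N, 3); spectra_train: (N, M) radiances at `wavelengths` (M,)."""
    # Express each training spectrum as a polynomial of wavelength.
    coeffs = np.polynomial.polynomial.polyfit(wavelengths, spectra_train.T, degree)  # (degree+1, N)
    # Solve coeffs ≈ T @ rgb in the least-squares sense for the 3 -> (degree+1) transform T.
    T, *_ = np.linalg.lstsq(rgb_train, coeffs.T, rcond=None)   # (3, degree+1)
    return T.T                                                  # (degree+1, 3)

def estimate_spectrum(rgb, T, wavelengths):
    """Reconstruct radiance at `wavelengths` from one RGB triplet."""
    return np.polynomial.polynomial.polyval(wavelengths, T @ rgb)
```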

  11. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing

  12. Automated Meteor Detection by All-Sky Digital Camera Systems

    Czech Academy of Sciences Publication Activity Database

    Suk, Tomáš; Šimberová, Stanislava

    2017-01-01

    Roč. 120, č. 3 (2017), s. 189-215 ISSN 0167-9295 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985815 ; RVO:67985556 Keywords : meteor detection * autonomous fireball observatories * fish-eye camera * Hough transformation Subject RIV: IN - Informatics, Computer Science; BN - Astronomy, Celestial Mechanics, Astrophysics (ASU-R) OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8); Astronomy (including astrophysics,space science) (ASU-R) Impact factor: 0.875, year: 2016

  13. How to photograph the Moon and planets with your digital camera

    CERN Document Server

    Buick, Tony

    2007-01-01

    Since the advent of astronomical CCD imaging it has been possible for amateurs to produce images of a quality that was attainable only by universities and professional observatories just a decade ago. However, astronomical CCD cameras are still very expensive, and technology has now progressed so that digital cameras - the kind you use on holiday - are more than capable of photographing the brighter astronomical objects, notably the Moon and major planets. Tony Buick has worked for two years on the techniques involved, and has written this illustrated step-by-step manual for anyone who has a telescope (of any size) and a digital camera. The color images he has produced - there are over 300 of them in the book - are of breathtaking quality. His book is more than a manual of techniques (including details of how to make a low-cost DIY camera mount) and examples; it also provides a concise photographic atlas of the whole of the nearside of the Moon - with every image made using a standard digital camera - and des...

  14. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    Directory of Open Access Journals (Sweden)

    Bruno Roux

    2008-11-01

    Full Text Available The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect and normalized between acquisition dates. Our results suggest that (1) the use of unprocessed image data did not improve the results of image analyses; (2) vignetting had a significant effect, especially for the modified camera; and (3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol, and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
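
    A simplified sketch of the two corrections discussed (flat-field division for vignetting and a normalized index that cancels the overall illumination level) is given below; the band names, the flat-field source and the clipping constants are assumptions rather than the authors' protocol.

```python
# A simplified version of the two corrections discussed in the record.
import numpy as np

def correct_vignetting(image: np.ndarray, flat_field: np.ndarray) -> np.ndarray:
    """flat_field: image of a uniform target taken with the same optics/settings."""
    gain = flat_field / flat_field.max()          # relative fall-off toward corners
    return image / np.clip(gain, 1e-6, None)

def normalized_index(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI-style ratio; the normalization cancels multiplicative illumination."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)
```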

  15. The hardware and software design for digital data acquisition system of γ-camera

    International Nuclear Information System (INIS)

    Zhang Chong; Jin Yongjie

    2006-01-01

    A digital data acquisition system, comprising hardware and software, is presented for updating traditional γ-cameras. The system has many advantages, such as small volume, rich functionality, high-quality images, low cost and extensibility. (authors)

  16. Euratom multi-camera optical surveillance system (EMOSS) - a digital solution

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.G.; Taillade, B.; Pryck, C. de.

    1991-01-01

    In 1989 the Euratom Safeguards Directorate of the Commission of the European Communities drew up functional and draft technical specifications for a new, fully digital multi-camera optical surveillance system. HYMATOM of Castries designed and built a prototype unit for laboratory and field tests. This paper reports on the system design and first test results.

  17. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    Science.gov (United States)

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
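
    Under a small-angle assumption for the same grating, the fringe spacing in the image scales linearly with wavelength, so the unknown infrared wavelength follows from a simple ratio against the known laser line; the numbers in the snippet below are made up for illustration.

```python
# For the same grating and small diffraction angles, fringe spacing is
# proportional to wavelength, so the unknown IR wavelength can be scaled
# from the known laser wavelength (values below are illustrative only).
lambda_laser_nm = 650.0       # known red laser wavelength
spacing_laser_px = 120.0      # fringe spacing of the laser pattern in the photo
spacing_ir_px = 175.0         # fringe spacing of the remote-control pattern

lambda_ir_nm = lambda_laser_nm * spacing_ir_px / spacing_laser_px
print(f"Estimated IR wavelength: {lambda_ir_nm:.0f} nm")   # ~948 nm for these numbers
```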

  18. COMPARISON OF DIGITAL SURFACE MODELS FOR SNOW DEPTH MAPPING WITH UAV AND AERIAL CAMERAS

    Directory of Open Access Journals (Sweden)

    R. Boesch

    2016-06-01

    Full Text Available Photogrammetric workflows for aerial images have improved over the last years in a typically black-box fashion. Most parameters for building dense point cloud are either excessive or not explained and often the progress between software releases is poorly documented. On the other hand, development of better camera sensors and positional accuracy of image acquisition is significant by comparing product specifications. This study shows, that hardware evolutions over the last years have a much stronger impact on height measurements than photogrammetric software releases. Snow height measurements with airborne sensors like the ADS100 and UAV-based DSLR cameras can achieve accuracies close to GSD * 2 in comparison with ground-based GNSS reference measurements. Using a custom notch filter on the UAV camera sensor during image acquisition does not yield better height accuracies. UAV based digital surface models are very robust. Different workflow parameter variations for ADS100 and UAV camera workflows seem to have only random effects.

  19. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    Full Text Available One of the technologies that continues to evolve today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose an appropriate camera based on their criteria. Users may rely on several sources, such as magazines, the internet, and other media, to help them choose. This paper discusses a web-based decision support system for choosing cameras using the SAW (Simple Additive Weighting) method, in order to make the decision process more effective and efficient. The system is expected to give recommendations for cameras appropriate to the user's needs and criteria based on cost, resolution, features, ISO, and sensor. The system was implemented using PHP and MySQL. Based on a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users to choose an appropriate DSLR camera in accordance with their needs, 60% of respondents agree that this decision support system makes choosing a DSLR camera more effective, and 75% of respondents agree that this system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
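
    For reference, a minimal Simple Additive Weighting ranking might look like the sketch below; the normalization rule (column maximum for benefit criteria, column minimum over value for cost criteria), the example criteria and the weights are assumptions, not details taken from the record.

```python
# A minimal Simple Additive Weighting (SAW) ranking.
import numpy as np

def saw_rank(scores, weights, is_benefit):
    """scores: (alternatives, criteria); weights sum to 1; is_benefit: bool per criterion."""
    scores = np.asarray(scores, dtype=float)
    norm = np.empty_like(scores)
    for j, benefit in enumerate(is_benefit):
        col = scores[:, j]
        norm[:, j] = col / col.max() if benefit else col.min() / col
    return norm @ np.asarray(weights, dtype=float)

# Example: three cameras scored on resolution (benefit), ISO range (benefit), price (cost).
totals = saw_rank([[24, 12800, 550], [18, 6400, 420], [20, 25600, 700]],
                  weights=[0.4, 0.3, 0.3], is_benefit=[True, True, False])
print(totals.argsort()[::-1])   # indices of cameras from best to worst
```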

  20. Effect of camera temperature variations on stereo-digital image correlation measurements

    KAUST Repository

    Pan, Bing

    2015-11-25

    In laboratory and especially non-laboratory stereo-digital image correlation (stereo-DIC) applications, the extrinsic and intrinsic parameters of the cameras used in the system may change slightly due to the camera warm-up effect and possible variations in ambient temperature. Because these camera parameters are generally calibrated once prior to measurements and considered to be unaltered during the whole measurement period, the changes in these parameters unavoidably induce displacement/strain errors. In this study, the effect of temperature variations on stereo-DIC measurements is investigated experimentally. To quantify the errors associated with camera or ambient temperature changes, surface displacements and strains of a stationary optical quartz glass plate with near-zero thermal expansion were continuously measured using a regular stereo-DIC system. The results confirm that (1) temperature variations in the cameras and ambient environment have a considerable influence on the displacements and strains measured by stereo-DIC due to the slightly altered extrinsic and intrinsic camera parameters; and (2) the corresponding displacement and strain errors correlate with temperature changes. For the specific stereo-DIC configuration used in this work, the temperature-induced strain errors were estimated to be approximately 30–50 με/°C. To minimize the adverse effect of camera temperature variations on stereo-DIC measurements, two simple but effective solutions are suggested.

  2. [Medical and dental digital photography. Choosing a cheap and user-friendly camera].

    Science.gov (United States)

    Chossegros, C; Guyot, L; Mantout, B; Cheynet, F; Olivi, P; Blanc, J-L

    2010-04-01

    Digital photography is increasingly important in everyday medical practice. Patient data, medico-legal proof, remote diagnosis, forums, and medical publications are some of the applications of digital photography in the medical and dental fields. Many small, light, and cheap cameras are on the market. The main issue is to obtain good, reproducible, cheap, and easy-to-shoot pictures. Every medical situation (portraits in esthetic surgery, skin photography in dermatology, X-ray pictures, or intra-oral pictures, for example) has its own requirements. For these reasons, we have tried to find an "ideal" compact digital camera. The Sony DSC-T90 (and its T900 counterpart with a wider screen) seems a good choice. Its small size makes it usable in every situation and its price is low. An external light source and a free photo software package (XnView®) can be useful complementary tools. The main adjustments and expected results are discussed.

  3. The diagnostic accuracy of single- and five-field fundus photography in diabetic retinopathy screening by primary care physicians.

    Science.gov (United States)

    Srihatrai, Parinya; Hlowchitsieng, Thanita

    2018-01-01

    The aim is to evaluate the diagnostic accuracy of digital fundus photography in diabetic retinopathy (DR) screening at a single university hospital. This was a cross-sectional hospital-based study. One hundred and ninety-eight diabetic patients were recruited for comprehensive eye examination by two ophthalmologists. Five-field fundus photographs were taken with a digital, nonmydriatic fundus camera, and trained primary care physicians then graded the severity of DR present on single-field 45° and five-field fundus photographs. Sensitivity and specificity of DR grading were reported using the findings from the ophthalmologists' examinations as the gold standard. When fundus photographs of the participants' 363 eyes were analyzed for the presence of DR, there was substantial agreement between the two primary care physicians, κ = 0.6226 for single-field and 0.6939 for five-field photograph interpretation. The sensitivity and specificity of DR detection with single-field photographs were 70.7% (95% confidence interval [CI]; 60.2%-79.7%) and 99.3% (95% CI; 97.4%-99.9%), respectively. Sensitivity and specificity for five-field photographs were 84.5% (95% CI; 75.8%-91.1%) and 98.6% (95% CI; 96.5%-99.6%), respectively. The area under the receiver operating characteristic curve was 0.85 (0.80-0.90) for single-field photographs and 0.92 (0.88-0.95) for five-field photographs. The sensitivity and specificity of fundus photographs for DR detection by primary care physicians were acceptable. Single- and five-field digital fundus photography each represent a convenient screening tool with acceptable accuracy.

  4. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    Science.gov (United States)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a remote color-imaging monitoring system using the Web were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the acquired and calibrated color images, together with the ambient atmospheric record. This study is an important step both in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.

  5. Clinical significance of non-mydriatic fundus photography in screening for preschool children ocular fundus disease

    Directory of Open Access Journals (Sweden)

    Jun Luo

    2014-06-01

    Full Text Available AIM: To observe the incidence of ocular fundus disease in preschool children examined with a non-mydriatic fundus camera and to evaluate its effectiveness compared with direct inspection with a shadow mirror. METHODS: Three thousand eight hundred and ninety-six preschool children from April 2012 to October 2013 were examined with a Topcon TRC-NW300 color fluorescence fundus camera and by direct inspection with a shadow mirror, and the images were saved immediately. RESULTS: The detection rate of non-mydriatic fundus photography was higher than that of direct inspection with a shadow mirror. Among the 3 896 cases, abnormal fundi were detected in 41 eyes, accounting for 1.05%. Retinal myelinated nerve fibers, morning glory syndrome, retinitis pigmentosa and congenital retinoschisis were the most common findings, accounting for 24.39%, 21.95%, 14.63% and 12.20%, respectively. The children's eye diseases were often accompanied by abnormal vision (68.30%), ametropia (63.41%) and strabismus (19.51%). CONCLUSION: Non-mydriatic fundus photography does not require pharmacologic pupil dilation, so it is easy for preschool children to accept. The images directly display fundus lesions, making the method valuable for screening preschool children for eye diseases.

  6. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  7. A 3D technique for simulation of irregular electron treatment fields using a digital camera

    International Nuclear Information System (INIS)

    Bassalow, Roustem; Sidhu, Narinder P.

    2003-01-01

    Cerrobend inserts, which define electron field apertures, are manufactured at our institution using perspex templates. Contours are reproduced manually on these templates at the simulator from the field outlines drawn on the skin or mask of a patient. A previously reported technique for simulation of electron treatment fields uses a digital camera to eliminate the need for such templates. However, avoidance of the image distortions introduced by non-flat surfaces on which the electron field outlines were drawn could only be achieved by limiting the application of this technique to surfaces which were flat or near flat. We present a technique that employs a digital camera and allows simulation of electron treatment fields contoured on an anatomical surface of an arbitrary three-dimensional (3D) shape, such as that of the neck, extremities, face, or breast. The procedure is fast, accurate, and easy to perform

  8. An innovative silicon photomultiplier digitizing camera for gamma-ray astronomy

    Czech Academy of Sciences Publication Activity Database

    Heller, M.; Schioppa, E.jr.; Porcelli, A.; Pujadas, I.T.; Zietara, K.; della Volpe, D.; Montaruli, T.; Cadoux, F.; Favre, Y.; Aguilar, J.A.; Christov, A.; Prandini, E.; Rajda, P.; Rameez, M.; Bilnik, W.; Blocki, J.; Bogacz, L.; Borkowski, J.; Bulik, T.; Frankowski, A.; Grudzinska, M.; Idzkowski, B.; Jamrozy, M.; Janiak, M.; Kasperek, J.; Lalik, K.; Lyard, E.; Mach, E.; Mandát, Dušan; Marszalek, A.; Medina Miranda, L. D.; Michałowski, J.; Moderski, R.; Neronov, A.; Niemiec, J.; Ostrowski, M.; Pasko, P.; Pech, Miroslav; Schovánek, Petr; Seweryn, K.; Sliusar, V.; Skowron, K.; Stawarz, L.; Stodulska, M.; Stodulski, M.; Walter, R.; Wiecek, M.; Zagdanski, A.

    2017-01-01

    Roč. 77, č. 1 (2017), s. 1-31, č. článku 47. ISSN 1434-6044 R&D Projects: GA MŠk LE13012; GA MŠk LG14019 Institutional support: RVO:68378271 Keywords : silicon photomultiplier * digitizing camera * gamma-ray astronomy Subject RIV: BF - Elementary Particles and High Energy Physics OBOR OECD: Particles and field physics Impact factor: 5.331, year: 2016

  9. PhenoCam Dataset v1.0: Digital Camera Imagery from the PhenoCam Network, 2000-2015

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset provides a time series of visible-wavelength digital camera imagery collected through the PhenoCam Network at each of 133 sites in North America and...

  10. Use of a Digital Camera to Monitor the Growth and Nitrogen Status of Cotton

    Directory of Open Access Journals (Sweden)

    Biao Jia

    2014-01-01

    Full Text Available The main objective of this study was to develop a nondestructive method for monitoring cotton growth and N status using a digital camera. Digital images were taken of the cotton canopies between emergence and full bloom. The green and red values were extracted from the digital images and then used to calculate canopy cover. The values of canopy cover were closely correlated with the normalized difference vegetation index and the ratio vegetation index measured using a GreenSeeker handheld sensor. Models were calibrated to describe the relationship between canopy cover and three growth properties of the cotton crop (i.e., aboveground total N content, LAI, and aboveground biomass). There were close, exponential relationships between canopy cover and the three growth properties, and the relationship for estimating cotton aboveground total N content was the most precise, with a coefficient of determination (R2) of 0.978 and a root mean square error (RMSE) of 1.479 g m−2. Moreover, the models were validated in three fields of high-yield cotton. The results indicated that the best relationship, between canopy cover and aboveground total N content, had an R2 value of 0.926 and an RMSE value of 1.631 g m−2. In conclusion, as a near-ground remote assessment tool, digital cameras have good potential for monitoring cotton growth and N status.
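
    A rough sketch of the two steps described, canopy cover from the green and red channels followed by an exponential calibration fitted on the log scale, is given below; the vegetation-index threshold and the function names are assumptions rather than the authors' exact procedure.

```python
# Illustrative canopy-cover estimate and exponential calibration.
import numpy as np

def canopy_cover(green: np.ndarray, red: np.ndarray, thresh: float = 0.05) -> float:
    """Fraction of pixels classified as canopy from a normalized green-red index."""
    index = (green.astype(float) - red) / np.clip(green.astype(float) + red, 1e-6, None)
    return float(np.mean(index > thresh))

def fit_exponential(cover: np.ndarray, variable: np.ndarray):
    """Fit variable = a * exp(b * cover) by linear regression on log(variable)."""
    b, log_a = np.polyfit(cover, np.log(variable), 1)
    return float(np.exp(log_a)), float(b)
```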

  11. Evaluation of the algorithms for recovering reflectance from virtual digital camera response

    Directory of Open Access Journals (Sweden)

    Ana Gebejes

    2012-10-01

    Full Text Available In recent years many new methods for quality control in the graphic industry have been proposed. All of these methods have one thing in common – using a digital camera as the capturing device and an appropriate image processing method/algorithm to obtain the desired information. With the development of new, more accurate sensors, digital cameras have become even more dominant and their use as measuring devices has become more emphasized. The idea of using a camera as a spectrophotometer is interesting because this kind of measurement would be more economical, faster and widely available, and it would provide the possibility of capturing multiple colours with a single shot. This can be very useful for capturing colour targets for characterization of different properties of a print device. A lot of effort is being put into enabling commercial colour CCD cameras (3 acquisition channels) to obtain enough information for reflectance recovery. Unfortunately, the RGB camera was not made with the idea of performing colour measurements but rather for producing an image that is visually pleasant for the observer. This somewhat complicates the task and calls for the development of different algorithms that estimate the reflectance information from the available RGB camera responses with the smallest possible error. In this paper three different reflectance estimation algorithms are evaluated (orthogonal projection, Wiener and optimized Wiener estimation), together with a method for reflectance approximation based on principal component analysis (PCA). The aim was to perform reflectance estimation pixel-wise and to analyze the performance of the reflectance estimation algorithms locally, at specific pixels in the image, and globally, on the whole image. The performance of each algorithm was evaluated visually and numerically by obtaining the pixel-wise colour difference and the pixel-wise difference between the estimated reflectance and the original values. It was concluded that the Wiener method gives the best reflectance estimation
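
    As background, the Wiener estimator evaluated in the paper is commonly written as W = C_r S^T (S C_r S^T + C_n)^-1, where S is the camera system matrix, C_r the reflectance prior covariance and C_n the noise covariance; a minimal sketch follows, with the matrix sources and names assumed.

```python
# A textbook Wiener estimator of reflectance from camera responses.
import numpy as np

def wiener_estimator(S: np.ndarray, C_r: np.ndarray, C_n: np.ndarray) -> np.ndarray:
    """S: (3, M) camera system matrix; C_r: (M, M) reflectance prior covariance;
    C_n: (3, 3) noise covariance. Returns the (M, 3) estimation matrix W."""
    return C_r @ S.T @ np.linalg.inv(S @ C_r @ S.T + C_n)

def estimate_reflectance(rgb: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Recover an M-sample reflectance spectrum from one RGB response."""
    return W @ rgb
```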

  12. STUDY OF USING CANON 1000D DIGITAL CAMERA FOR MULTIZONE PHOTOGRAPHY WITH SPATIALLY-RESOLVED SPECTRAL DEVICES

    Directory of Open Access Journals (Sweden)

    K. N. Kaplevskiy

    2013-01-01

    Full Text Available The possibility of using a Canon 1000D digital camera for multizone photography is demonstrated. It is found that the dynamic range of the recorded light intensities extends from 260 to 3650 ADC quantization levels. The linearity of the camera's light-sensitive element has been studied as a function of illumination and exposure time.

  13. Morphometric Optic Nerve Head Analysis in Glaucoma Patients: A Comparison between the Simultaneous Nonmydriatic Stereoscopic Fundus Camera (Kowa Nonmyd WX3D) and the Heidelberg Scanning Laser Ophthalmoscope (HRT III)

    Directory of Open Access Journals (Sweden)

    Siegfried Mariacher

    2016-01-01

    Full Text Available Purpose. To investigate retrospectively the agreement between morphometric optic nerve head parameters assessed with the confocal laser ophthalmoscope HRT III and the stereoscopic fundus camera Kowa nonmyd WX3D. Methods. Morphometric optic nerve head parameters of 40 eyes of 40 patients with primary open angle glaucoma were analyzed with regard to their vertical cup-to-disc ratio (CDR). Vertical CDR, disc area, cup volume, rim volume, and maximum cup depth were assessed with both devices by one examiner. Mean bias and limits of agreement (95% CI) were obtained using scatter plots and Bland-Altman analysis. Results. Overall, the vertical CDR comparison between HRT III and Kowa nonmyd WX3D measurements showed a mean difference (limits of agreement) of −0.06 (−0.36 to 0.24). For the CDR < 0.5 group (n=24), the mean difference in vertical CDR was −0.14 (−0.34 to 0.06), and for the CDR ≥ 0.5 group (n=16) it was 0.06 (−0.21 to 0.34). Conclusion. This study showed good agreement between the Kowa nonmyd WX3D and HRT III with regard to widely used optic nerve head parameters in patients with glaucomatous optic neuropathy. However, the Kowa nonmyd WX3D tended to measure larger CDR values than HRT III in the CDR < 0.5 group and lower CDR values in the CDR ≥ 0.5 group.

  14. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs, and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence of its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  15. An innovative silicon photomultiplier digitizing camera for gamma-ray astronomy

    Energy Technology Data Exchange (ETDEWEB)

    Heller, M. [DPNC-Universite de Geneve, Geneva (Switzerland); Schioppa, E. Jr; Porcelli, A.; Pujadas, I.T.; Della Volpe, D.; Montaruli, T.; Cadoux, F.; Favre, Y.; Christov, A.; Rameez, M.; Miranda, L.D.M. [DPNC-Universite de Geneve, Geneva (Switzerland); Zietara, K.; Idzkowski, B.; Jamrozy, M.; Ostrowski, M.; Stawarz, L.; Zagdanski, A. [Jagellonian University, Astronomical Observatory, Krakow (Poland); Aguilar, J.A. [DPNC-Universite de Geneve, Geneva (Switzerland); Universite Libre Bruxelles, Faculte des Sciences, Brussels (Belgium); Prandini, E.; Lyard, E.; Neronov, A.; Walter, R. [Universite de Geneve, Department of Astronomy, Geneva (Switzerland); Rajda, P.; Bilnik, W.; Kasperek, J.; Lalik, K.; Wiecek, M. [AGH University of Science and Technology, Krakow (Poland); Blocki, J.; Mach, E.; Michalowski, J.; Niemiec, J.; Skowron, K.; Stodulski, M. [Instytut Fizyki Jadrowej im. H. Niewodniczanskiego Polskiej Akademii Nauk, Krakow (Poland); Bogacz, L. [Jagiellonian University, Department of Information Technologies, Krakow (Poland); Borkowski, J.; Frankowski, A.; Janiak, M.; Moderski, R. [Polish Academy of Science, Nicolaus Copernicus Astronomical Center, Warsaw (Poland); Bulik, T.; Grudzinska, M. [University of Warsaw, Astronomical Observatory, Warsaw (Poland); Mandat, D.; Pech, M.; Schovanek, P. [Institute of Physics of the Czech Academy of Sciences, Prague (Czech Republic); Marszalek, A.; Stodulska, M. [Instytut Fizyki Jadrowej im. H. Niewodniczanskiego Polskiej Akademii Nauk, Krakow (Poland); Jagellonian University, Astronomical Observatory, Krakow (Poland); Pasko, P.; Seweryn, K. [Centrum Badan Kosmicznych Polskiej Akademii Nauk, Warsaw (Poland); Sliusar, V. [Universite de Geneve, Department of Astronomy, Geneva (Switzerland); Taras Shevchenko National University of Kyiv, Astronomical Observatory, Kyiv (Ukraine)

    2017-01-15

    The single-mirror small-size telescope (SST-1M) is one of the three proposed designs for the small-size telescopes (SSTs) of the Cherenkov Telescope Array (CTA) project. The SST-1M will be equipped with a 4 m-diameter segmented reflector dish and an innovative fully digital camera based on silicon photo-multipliers. Since the SST sub-array will consist of up to 70 telescopes, the challenge is not only to build telescopes with excellent performance, but also to design them so that their components can be commissioned, assembled and tested by industry. In this paper we review the basic steps that led to the design concepts for the SST-1M camera and the ongoing realization of the first prototype, with focus on the innovative solutions adopted for the photodetector plane and the readout and trigger parts of the camera. In addition, we report on results of laboratory measurements on real scale elements that validate the camera design and show that it is capable of matching the CTA requirements of operating up to high moonlight background conditions. (orig.)

  16. Geocam Space: Enhancing Handheld Digital Camera Imagery from the International Space Station for Research and Applications

    Science.gov (United States)

    Stefanov, William L.; Lee, Yeon Jin; Dille, Michael

    2016-01-01

    Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir-viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high-quality spatial information approaching 2 meters/ground pixel resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov) and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully georeferenced data products - while current digital cameras typically have integrated GPS, it does not function in the low Earth orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth Science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board the ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g. Landsat, Google Earth, etc. The lack of full geolocation

  17. Quantification of atmospheric visibility with dual digital cameras during daytime and nighttime

    Directory of Open Access Journals (Sweden)

    K. Du

    2013-08-01

    Full Text Available A digital optical method, "DOM-Vis", was developed to measure atmospheric visibility. In this method, two digital pictures are taken of the same target at two different distances along the same straight line. The pictures are analyzed to determine the optical contrasts between the target and its sky background, from which the visibility is subsequently calculated. A light transfer scheme for DOM-Vis was delineated, based upon which algorithms were developed for both daytime and nighttime scenarios. A series of field tests was carried out under different weather and meteorological conditions to study the impacts of such operational parameters as exposure, optical zoom, distance between the two camera locations, and distance of the target. The method was validated by comparing the DOM-Vis results with those measured using a co-located Vaisala® visibility meter. The visibility under which this study was carried out ranged from 1 to 20 km. This digital-photography-based method possesses a number of advantages compared with traditional methods. Pre-calibration of the detector with a visibility meter is not required. In addition, the application of DOM-Vis is independent of several factors such as the exact distance of the target and several camera setting parameters. These features make DOM-Vis more adaptive under a variety of field conditions.
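
    The daytime case can be reconstructed, under Koschmieder-type assumptions (horizontal path, target seen against the horizon sky, 2% contrast threshold), from the two measured contrasts and the two distances; the exact DOM-Vis algorithm may differ, and the numbers below are purely illustrative.

```python
# Koschmieder-type sketch: apparent contrast decays as C(d) = C0 * exp(-sigma * d),
# so two photos of the same target at two distances yield the extinction sigma.
import math

def visibility_km(contrast_near, contrast_far, dist_near_km, dist_far_km):
    """Return the meteorological visual range (2% threshold) in kilometres."""
    sigma = math.log(contrast_near / contrast_far) / (dist_far_km - dist_near_km)
    return -math.log(0.02) / sigma

print(round(visibility_km(0.60, 0.45, 2.0, 5.0), 1))   # ~40.8 km for these numbers
```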

  18. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    Science.gov (United States)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors, yielding a visually more pleasing image than one captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
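
    Per-channel transfer curves of the kind described can be applied with a simple lookup table built from a few control points; the sketch below is a generic illustration, not the authors' Photoshop workflow, and the control points are placeholders.

```python
# Apply smoothed per-channel tone curves via a lookup table.
import numpy as np

def apply_curve(channel: np.ndarray, ctrl_in, ctrl_out) -> np.ndarray:
    """channel: uint8 array; ctrl_in/ctrl_out: increasing control points in 0..255."""
    lut = np.interp(np.arange(256), ctrl_in, ctrl_out)
    return lut[channel].astype(np.uint8)

def correct_rgb(img: np.ndarray, curves) -> np.ndarray:
    """img: (H, W, 3) uint8; curves: dict of per-channel (ctrl_in, ctrl_out) pairs."""
    out = img.copy()
    for i, ch in enumerate("rgb"):
        out[..., i] = apply_curve(img[..., i], *curves[ch])
    return out

# Example: lift the shadows slightly on all channels while keeping highlights pinned.
curves = {ch: ([0, 64, 255], [0, 80, 255]) for ch in "rgb"}
```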

  19. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    Science.gov (United States)

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional (MADC II) and proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also suggest that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of downstream photogrammetric products. PMID:25835187

  20. A TV camera system for digitizing single shot oscillograms at sweep rate of 0.1 ns/cm

    International Nuclear Information System (INIS)

    Kienlen, M.; Knispel, G.; Miehe, J.A.; Sipp, B.

    1976-01-01

    A TV camera digitizing system associated with a 5 GHz photocell-oscilloscope apparatus allows the digitizing of single-shot oscillograms; with an oscilloscope sweep rate of 0.1 ns/cm, an accuracy of 4 ps in time measurements is obtained.

  1. Feasibility study of a novel general purpose CZT-based digital SPECT camera: initial clinical results.

    Science.gov (United States)

    Goshen, Elinor; Beilin, Leonid; Stern, Eli; Kenig, Tal; Goldkorn, Ronen; Ben-Haim, Simona

    2018-03-14

    The performance of a prototype novel digital single-photon emission computed tomography (SPECT) camera with multiple pixelated CZT detectors and high sensitivity collimators (Digital SPECT; Valiance X12 prototype, Molecular Dynamics) was evaluated in various clinical settings. Images obtained in the prototype system were compared to images from an analog camera fitted with high-resolution collimators. Clinical feasibility, image quality, and diagnostic performance of the prototype were evaluated in 36 SPECT studies in 35 patients including bone (n = 21), brain (n = 5), lung perfusion (n = 3), and parathyroid (n = 3) and one study each of sentinel node and labeled white blood cells. Images were graded on a scale of 1-4 for sharpness, contrast, overall quality, and diagnostic confidence. Digital CZT SPECT provided a statistically significant improvement in sharpness and contrast in clinical cases (mean score of 3.79 ± 0.61 vs. 3.26 ± 0.50 and 3.92 ± 0.29 vs. 3.34 ± 0.47 respectively, p < 0.001 for both). Overall image quality was slightly higher for the digital SPECT but not statistically significant (3.74 vs. 3.66). CZT SPECT provided significantly improved image sharpness and contrast compared to the analog system in the clinical settings evaluated. Further studies will evaluate the diagnostic performance of the system in large patient cohorts in additional clinical settings.

  2. Procedure for fully automatic orientation of camera in digital close-range photogrammetry

    Science.gov (United States)

    Huang, Yong Ru; Trinder, John C.

    1994-03-01

    This paper presents an automatic camera orientation procedure developed for a digital close-range photogrammetric system. In this application, small bright balls mounted on a calibration frame serve as control points, since their shape in an image is invariant to the camera position: they are always imaged as circles. To recognize the circles in the image, an edge detection algorithm is used to extract the circular edges with subpixel accuracy. The circles are recognized by matching the shape of these edges with the shape of an ideal circular target. The central locations of the circles and their diameters are then determined from these edge points. Identifying the circles, that is, arranging the list of circles in the image in the order of the corresponding balls in the 3D world, is a problem of artificial intelligence. A fast search is described that exploits the available information in order to limit the number of possible alternative orderings of the targets; in this way, the search can be carried out efficiently. The identification step attaches the correct numbers to the corresponding circles. Finally, the precise camera parameters are calculated by bundle adjustment.

  3. Performance of low-cost X-ray area detectors with consumer digital cameras

    International Nuclear Information System (INIS)

    Panna, A.; Gomella, A.A.; Harmon, K.J.; Chen, P.; Miao, H.; Bennett, E.E.; Wen, H.

    2015-01-01

    We constructed X-ray detectors using consumer-grade digital cameras coupled to commercial X-ray phosphors. Several detector configurations were tested against the Varian PaxScan 3024M (Varian 3024M) digital flat panel detector. These include consumer cameras (Nikon D800, Nikon D700, and Nikon D3X) coupled to a green emission phosphor in a back-lit, normal incidence geometry, and in a front-lit, oblique incidence geometry. We used the photon transfer method to evaluate detector sensitivity and dark noise, and the edge test method to evaluate their spatial resolution. The essential specifications provided by our evaluation include discrete charge events captured per mm² per unit exposure surface dose, dark noise in equivalents of charge events per pixel, and spatial resolution in terms of the full width at half maximum (FWHM) of the detector's line spread function (LSF). Measurements were performed using a tungsten anode X-ray tube at 50 kVp. The results show that the home-built detectors provide better sensitivity and lower noise than the commercial flat panel detector, and some have better spatial resolution. The trade-off is substantially smaller imaging areas. Given their much lower costs, these home-built detectors are attractive options for prototype development of low-dose imaging applications
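
    The photon transfer method cited above reduces, in its simplest form, to fitting variance against mean signal over pairs of flat frames: the slope gives the conversion gain and the intercept the read-noise variance. A minimal sketch, with the frame sources and names assumed:

```python
# Minimal photon transfer analysis from pairs of flat-field frames.
import numpy as np

def photon_transfer(frame_pairs):
    """frame_pairs: list of (frame_a, frame_b) flat-field pairs at different signal levels."""
    means, variances = [], []
    for a, b in frame_pairs:
        a, b = a.astype(np.float64), b.astype(np.float64)
        means.append(0.5 * (a.mean() + b.mean()))
        variances.append(np.var(a - b) / 2.0)     # differencing removes fixed pattern
    slope, intercept = np.polyfit(means, variances, 1)
    gain_e_per_dn = 1.0 / slope                   # electrons per digital number
    read_noise_dn = np.sqrt(max(intercept, 0.0))  # RMS read noise in DN
    return gain_e_per_dn, read_noise_dn
```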

  4. A fluorometric lateral flow assay for visual detection of nucleic acids using a digital camera readout.

    Science.gov (United States)

    Magiati, Maria; Sevastou, Areti; Kalogianni, Despina P

    2018-06-04

    A fluorometric lateral flow assay has been developed for the detection of nucleic acids. The fluorophores phycoerythrin (PE) and fluorescein isothiocyanate (FITC) were used as labels, while a common digital camera and a colored vinyl sheet, acting as a cut-off optical filter, are used for fluorescence imaging. After DNA amplification by polymerase chain reaction (PCR), the biotinylated PCR product is hybridized to its complementary probe, which carries a poly(dA) tail at the 3′ end, and then applied to the lateral flow strip. The hybrids are captured on the test zone of the strip by immobilized poly(dT) sequences and detected by streptavidin-fluorescein and streptavidin-phycoerythrin conjugates through the streptavidin-biotin interaction. The assay is widely applicable, simple, cost-effective, and offers a large multiplexing potential. Its performance is comparable to assays based on the use of streptavidin-gold nanoparticle conjugates. As low as 7.8 fmol of a ssDNA and 12.5 fmol of an amplified dsDNA target were detectable. Graphical abstract: Schematic presentation of a fluorometric lateral flow assay based on fluorescein and phycoerythrin fluorescent labels for the detection of single-stranded (ssDNA) and double-stranded DNA (dsDNA) sequences using a digital camera readout. SA: streptavidin, BSA: Bovine Serum Albumin, B: biotin, FITC: fluorescein isothiocyanate, PE: phycoerythrin, TZ: test zone, CZ: control zone.

  5. The central corneal light reflex ratio from photographs derived from a digital camera in young adults.

    Science.gov (United States)

    Duangsang, Suampa; Tengtrisorn, Supaporn

    2012-05-01

    To determine the normal range of the Central Corneal Light Reflex Ratio (CCLRR) from photographs of young adults. A digital camera equipped with a telephoto lens with a flash attachment placed directly above the lens was used to obtain corneal light reflex photographs of 104 subjects, first with the subject fixating on the lens of the camera at a distance of 43 centimeters, and then while looking past the camera to a wall at a distance of 5.4 meters. Digital images were displayed using Adobe Photoshop at a magnification of 1200%. The CCLRR was the ratio of the sum of distances between the inner margin of the cornea and the central corneal light reflex of each eye to the sum of the horizontal corneal diameters of each eye. Measurements were made by three technicians on all subjects, and repeated on a 16% (n=17) subsample. Mean ratios (standard deviation-SD) from near/distance measurements were 0.468 (0.012)/0.452 (0.019). Limits of the normal range, with 95% certainty, were 0.448 and 0.488 for near measurements and 0.419 and 0.484 for distance measurements. Lower and upper indeterminate zones were 0.440-0.447 and 0.489-0.497 for near measurements and 0.406-0.418 and 0.485-0.497 for distance measurements. More extreme values can be considered as abnormal. The reproducibility and repeatability of the test were good. This method is easy to perform and has potential for use in strabismus screening by paramedical personnel.
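    Once the two inner-corneal-margin-to-reflex distances and the two corneal diameters have been measured on the magnified photograph, the ratio itself is a single division; a minimal sketch (hypothetical variable names, not the study's software) is:

        def cclrr(reflex_dist_right, reflex_dist_left, corneal_diam_right, corneal_diam_left):
            """Sum of inner-margin-to-reflex distances over sum of horizontal corneal
            diameters, measured in any consistent unit (e.g. pixels)."""
            return (reflex_dist_right + reflex_dist_left) / (corneal_diam_right + corneal_diam_left)

        # Example: a value near 0.47 lies inside the reported normal range for near fixation.
        print(round(cclrr(55.0, 57.0, 120.0, 119.0), 3))   # 0.469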

  6. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    Science.gov (United States)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50 mm focal-length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate the corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  7. Ophthalmoscopy versus non-mydriatic fundus photography in the ...

    African Journals Online (AJOL)

    1990-09-01

    Sep 1, 1990 ... non-mydriatic fundus camera in the detection of diabetic retinopathy ... the external and the internal fixation lamps of the non-mydriatic ... Rosen ES, Raines M, Hancock R. Use of non-mydriatic cameras to screen diabetic ...

  8. Digital camera image analysis of faeces in detection of cholestatic jaundice in infants.

    Science.gov (United States)

    Parinyanut, Parinya; Bandisak, Tai; Chiengkriwate, Piyawan; Tanthanuch, Sawit; Sangkhathat, Surasak

    2016-01-01

    Stool colour assessment is a screening method for biliary tract obstruction in infants. This study aims to be a proof-of-concept work on digital photograph image analysis of stool colour, compared to colour grading by a colour card and to the stool bilirubin level test. The total bilirubin (TB) content in stool samples from 17 infants aged less than 1 year, seven with confirmed cholestatic jaundice and ten healthy subjects, was measured, and the outcome was correlated with the physical colour of the stool. The seven infants with cholestasis included 6 cases of biliary atresia and 1 case of pancreatic mass. All pre-operative stool samples in these cases were indicated as grade 1 on the stool card (stool colour in healthy infants ranges from 4 to 6). The average stool TB in the pale stool group was 43.07 μg/g compared to 101.78 μg/g in the non-pale stool group. Of the 3 colour channels assessed in the digital photographs, the blue and green channels were best able to discriminate accurately between the pre-operative stool samples from infants with cholestasis and the samples from the healthy controls. With red, green, and blue (RGB) image analysis using wave level as the ANN input, the system predicts the stool TB with a relationship coefficient of 0.96, compared to 0.61 when stool colour card grading was used. Input from digital camera images of stool had a higher predictive capability compared to the standard stool colour card, indicating that digital photographs may be a useful tool for detection of cholestasis in infants.
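    For illustration of the image-analysis side only, the sketch below extracts mean R, G and B values from a stool photograph and fits a plain linear model to the stool bilirubin values; the study itself used an artificial neural network with wave-level inputs, so this regression is a deliberately simplified stand-in and all function names are assumptions.

        import numpy as np

        def mean_rgb(image):
            """image: (H, W, 3) uint8 array from a digital photo -> mean (R, G, B) values."""
            return image.reshape(-1, 3).astype(float).mean(axis=0)

        def fit_linear(features, stool_tb):
            """features: (N, 3) rows of mean RGB values; stool_tb: (N,) total bilirubin (ug/g)."""
            X = np.column_stack([features, np.ones(len(features))])
            coef, *_ = np.linalg.lstsq(X, stool_tb, rcond=None)
            return coef

        def predict_tb(coef, feature):
            """Predict stool total bilirubin from one mean-RGB feature vector."""
            return float(np.dot(np.append(feature, 1.0), coef))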

  9. Digital camera image analysis of faeces in detection of cholestatic jaundice in infants

    Directory of Open Access Journals (Sweden)

    Parinya Parinyanut

    2016-01-01

    Full Text Available Background: Stool colour assessment is a screening method for biliary tract obstruction in infants. This study aims to be a proof-of-concept work on digital photograph image analysis of stool colour, compared to colour grading by a colour card and to the stool bilirubin level test. Materials and Methods: The total bilirubin (TB) content in stool samples from 17 infants aged less than 1 year, seven with confirmed cholestatic jaundice and ten healthy subjects, was measured, and the outcome was correlated with the physical colour of the stool. Results: The seven infants with cholestasis included 6 cases of biliary atresia and 1 case of pancreatic mass. All pre-operative stool samples in these cases were indicated as grade 1 on the stool card (stool colour in healthy infants ranges from 4 to 6). The average stool TB in the pale stool group was 43.07 μg/g compared to 101.78 μg/g in the non-pale stool group. Of the 3 colour channels assessed in the digital photographs, the blue and green channels were best able to discriminate accurately between the pre-operative stool samples from infants with cholestasis and the samples from the healthy controls. With red, green, and blue (RGB) image analysis using wave level as the ANN input, the system predicts the stool TB with a relationship coefficient of 0.96, compared to 0.61 when stool colour card grading was used. Conclusion: Input from digital camera images of stool had a higher predictive capability compared to the standard stool colour card, indicating that digital photographs may be a useful tool for detection of cholestasis in infants.

  10. Airborne hyperspectral observations of surface and cloud directional reflectivity using a commercial digital camera

    Directory of Open Access Journals (Sweden)

    A. Ehrlich

    2012-04-01

    Full Text Available Spectral radiance measurements by a digital single-lens reflex camera were used to derive the directional reflectivity of clouds and different surfaces in the Arctic. The camera has been calibrated radiometrically and spectrally to provide accurate radiance measurements with high angular resolution. A comparison with spectral radiance measurements by the Spectral Modular Airborne Radiation measurement sysTem (SMART-Albedometer) showed an agreement within the uncertainties of both instruments (6% for both). The directional reflectivity in terms of the hemispherical directional reflectance factor (HDRF) was obtained for sea ice, ice-free ocean and clouds. The sea ice, with an albedo of ρ = 0.96 (at 530 nm wavelength), showed an almost isotropic HDRF, while sun glint was observed for the ocean HDRF (ρ = 0.12). For the cloud observations with ρ = 0.62, the cloudbow – a backscatter feature typical of scattering by liquid water droplets – was covered by the camera. For measurements above heterogeneous stratocumulus clouds, the required number of images to obtain a mean HDRF that clearly exhibits the cloudbow has been estimated at about 50 images (10 min flight time). A representation of the HDRF as a function of the scattering angle only reduces the image number to about 10 (2 min flight time).

    The measured cloud and ocean HDRF have been compared to radiative transfer simulations. The ocean HDRF simulated with the observed surface wind speed of 9 m s−1 agreed best with the measurements. For the cloud HDRF, the best agreement was obtained by a broad and weak cloudbow simulated with a cloud droplet effective radius of Reff = 4 μm. This value agrees with the particle sizes derived from in situ measurements and retrieved from the spectral radiance of the SMART-Albedometer.

  11. Automated Soil Physical Parameter Assessment Using Smartphone and Digital Camera Imagery

    Directory of Open Access Journals (Sweden)

    Matt Aitkenhead

    2016-12-01

    Full Text Available Here we present work on using different types of soil profile imagery (topsoil profiles captured with a smartphone camera and full-profile images captured with a conventional digital camera) to estimate the structure, texture and drainage of the soil. The method is adapted from earlier work on developing smartphone apps for estimating topsoil organic matter content in Scotland and uses an existing visual soil structure assessment approach. Colour and image texture information was extracted from the imagery. This information was linked, using geolocation information derived from the smartphone GPS system or from field notes, with existing collections of topography, land cover, soil and climate data for Scotland. A neural network model was developed that was capable of estimating soil structure (on a five-point scale), soil texture (sand, silt, clay), bulk density, pH and drainage category using this information. The model is sufficiently accurate to provide estimates of these parameters from soils in the field. We discuss potential improvements to the approach and plans to integrate the model into a set of smartphone apps for estimating health and fertility indicators for Scottish soils.

  12. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    Science.gov (United States)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which is considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with Fi

  13. CALIBRATION OF LOW COST DIGITAL CAMERA USING DATA FROM SIMULTANEOUS LIDAR AND PHOTOGRAMMETRIC SURVEYS

    Directory of Open Access Journals (Sweden)

    E. Mitishita

    2012-07-01

    Full Text Available Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters for photogrammetric applications. Camera calibration is a procedure applied to compute the interior orientation parameters (IOPs). Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same conditions as the photogrammetric survey. This paper shows the methodology and experimental results from in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was placed on the test field for use as check points or control points. The photogrammetric images and lidar dataset of the test field were taken simultaneously. Four strips of flight were used to obtain a cross layout. The strips were taken with opposite directions of flight (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure station were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or computed by an interpolation procedure using the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency of the group of interior and exterior orientation parameters. This linear dependency happens, in the calibration procedure, when the vertical images and

  14. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    Full Text Available In this paper we propose a visible watermarking algorithm, in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method then enforces the rightful ownership of the watermarked image, since there is no other version of the image than the watermarked one. We also take into consideration the Human Visual System (HVS) so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only a binary watermark pattern is supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertising purposes, such as digital libraries and e-commerce, besides copyright protection.

  15. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Czech Academy of Sciences Publication Activity Database

    Pospíšil, Jaroslav; Jakubík, P.; Machala, L.

    2005-01-01

    Roč. 116, - (2005), s. 573-585 ISSN 0030-4026 Institutional research plan: CEZ:AV0Z10100522 Keywords: random-target measuring method * light-reflection white-noise target * digital video camera * modulation transfer function * power spectral density Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.395, year: 2005

  16. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. This kind of instrument should also be automated and robust, since it may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable to many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after the measurement of the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
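    The parallax principle behind the cloud base height measurement can be conveyed with the textbook parallel-axis stereo formula below; the prototype described above uses a different, more complex camera geometry, so this Python sketch (with illustrative numbers) shows the idea rather than the instrument's actual computation.

        def cloud_base_height_m(baseline_m, focal_length_mm, pixel_pitch_um, disparity_px):
            """Height of a cloud feature seen with a pixel disparity between two parallel,
            vertically pointing cameras separated by a horizontal baseline."""
            disparity_m = disparity_px * pixel_pitch_um * 1e-6   # disparity on the sensor
            focal_m = focal_length_mm * 1e-3
            return baseline_m * focal_m / disparity_m

        # Example: 2 m baseline, 8 mm lens, 5 um pixel pitch, 1 px disparity -> 3200 m.
        print(cloud_base_height_m(2.0, 8.0, 5.0, 1.0))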

  17. Measuring the Bed Load velocity in Laboratory flumes using ADCP and Digital Cameras

    Science.gov (United States)

    Conevski, Slaven; Guerrero, Massimo; Rennie, Colin; Bombardier, Josselin

    2017-04-01

    Measuring the transport rate and apparent velocity of bed load is notoriously hard, and there is no established technique that provides continuous data. There are many empirical models based on the estimation of shear stress, but only a few involve direct measurement of the bed load velocity. The bottom tracking (BT) mode of an acoustic Doppler current profiler (ADCP) has been used many times to estimate the apparent velocity of the bed load. The basic idea is to exploit the bias of the BT signal towards the bed load movement and to calibrate this signal against traditional measuring techniques. Such measurements are quite scarce and seldom reliable, since they are not taken under controlled conditions. So far, no clear confirmation has been obtained in laboratory-controlled conditions that would support the assumptions made in estimating the apparent bed load velocity, nor in calibrating the empirical equations. Therefore, this study explores several experiments under stationary conditions, in which the signal of the ADCP BT mode is recorded and compared to the bed load motion recorded by digital camera videography. The experiments were performed in the hydraulic laboratories of Ottawa and Bologna, using two different ADCPs and two different high-resolution cameras. In total, more than 30 experiments were performed for different sediment mixtures and different hydraulic conditions. In general, a good match is documented between the apparent bed load velocity measured by the ADCP and by videography. The slight deviations in individual experiments can be explained by gravel particle inhomogeneity, the difficulty of reproducing the same hydro-sedimentological conditions, and the randomness of the backscattering strength.

  18. Genetics Home Reference: fundus albipunctatus

    Science.gov (United States)

    ... Lorenz B, Sander B, Larsen M, Eckstein C, Rosenberg T. Lack of autofluorescence in fundus albipunctatus associated ... Preising M, Lorenz B, Sander B, Larsen M, Rosenberg T. Fundus albipunctatus associated with compound heterozygous mutations ...

  19. Respiratory-Gated MRgHIFU in Upper Abdomen Using an MR-Compatible In-Bore Digital Camera

    Directory of Open Access Journals (Sweden)

    Vincent Auboiroux

    2014-01-01

    Full Text Available Objective. To demonstrate the technical feasibility and the potential interest of using a digital optical camera inside the MR magnet bore for monitoring the breathing cycle and subsequently gating the PRFS MR thermometry, MR-ARFI measurement, and MRgHIFU sonication in the upper abdomen. Materials and Methods. A digital camera was reengineered to remove its magnetic parts and was further equipped with a 7 m long USB cable. The system was electromagnetically shielded and operated inside the bore of a closed 3T clinical scanner. Suitable triggers were generated based on real-time motion analysis of the images produced by the camera (resolution 640×480 pixels, 30 fps). Respiratory-gated MR-ARFI prepared MRgHIFU ablation was performed in the kidney and liver of two sheep in vivo, under general anaesthesia and ventilator-driven forced breathing. Results. The optical device demonstrated very good MR compatibility. The current setup permitted the acquisition of motion artefact-free and high resolution MR 2D ARFI and multiplanar interleaved PRFS thermometry (average SNR 30 in liver and 56 in kidney). Microscopic histology indicated precise focal lesions with sharply delineated margins following the respiratory-gated HIFU sonications. Conclusion. The proof-of-concept for respiratory motion management in MRgHIFU using an in-bore digital camera has been validated in vivo.

  20. Inter-Rater Reliability of Cyclotorsion Measurements Using Fundus Photography.

    Science.gov (United States)

    Dysli, Muriel; Kanku, Madeleine; Traber, Ghislaine L

    2018-04-01

    The foveo-papillary angle (FPA) on fundus photographs is the accepted standard for the measurement of ocular cyclotorsion. We assessed the inter-rater reliability of this method in healthy subjects and in patients with trochlear nerve palsies. In this methodological study, fundus photographs of healthy subjects and of patients with trochlear nerve palsies were made with a fundus camera (Zeiss Fundus Camera FF 450 plus, Jena, Germany). Three independent observers measured the FPA on the fundus photographs of all subjects in synedra View (synedra View 16, Version 16.0.0.11, Innsbruck, Austria). One hundred and four eyes of 52 subjects (26 healthy controls and 26 patients) were assessed. The mean FPA of the healthy controls was 5.80 degrees (°) [± 0.44 standard error of the mean (SEM)] compared to 11.55° (± 0.80 SEM) for patients with trochlear nerve palsies. The inter-rater reliability of all measured FPAs showed an intraclass correlation coefficient (ICC) of 0.98 (95% CI 0.97 - 0.98). The inter-rater reliability of objective cyclotorsion measurements using fundus photographs was very high. Georg Thieme Verlag KG Stuttgart · New York.

  1. a R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset

    Science.gov (United States)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photographs from digital cameras are a useful and very large data source for phenological analysis, but processing and mining phenological data is still a big challenge, and there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for extracting and analysing vegetation phenological parameters. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observational photography data from the Freemanwood site in 2013 were processed with this system. The results show that: (1) the system is capable of analysing large data volumes using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different method combinations in particular study areas. Vegetation with a single growth peak is suitable for fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
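    As a hedged illustration of the double-logistic fitting mentioned above (the system itself is built on R/Shiny; the parameter names and starting values below are assumptions, not the application's defaults), a Python sketch of fitting a vegetation-index time series could look like this:

        import numpy as np
        from scipy.optimize import curve_fit

        def double_logistic(t, v_min, v_amp, k_up, t_green, k_down, t_senesce):
            # Classic double-logistic growth curve: one rising and one falling sigmoid.
            rise = 1.0 / (1.0 + np.exp(-k_up * (t - t_green)))
            fall = 1.0 / (1.0 + np.exp(-k_down * (t - t_senesce)))
            return v_min + v_amp * (rise - fall)

        def fit_phenology(doy, vegetation_index):
            """doy: day-of-year samples; vegetation_index: e.g. green chromatic coordinate."""
            p0 = [float(vegetation_index.min()), float(np.ptp(vegetation_index)),
                  0.1, 120.0, 0.1, 270.0]
            params, _ = curve_fit(double_logistic, doy, vegetation_index, p0=p0, maxfev=10000)
            return params   # t_green and t_senesce approximate green-up and senescence dates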

  2. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
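    The core idea of applying PCA to vectorised patches from a supporting window can be sketched generically as below; this omits the CFA-specific grouping of red, green and blue samples and the spatially adaptive choice of components described in the paper, so it illustrates the principle rather than the published algorithm.

        import numpy as np

        def pca_denoise_patches(patches, n_keep):
            """patches: (N, P) matrix whose rows are vectorised local patches taken from
            the supporting window; keep the n_keep leading principal components."""
            mean = patches.mean(axis=0)
            centred = patches - mean
            cov = centred.T @ centred / len(patches)
            eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
            basis = eigvec[:, -n_keep:]            # leading components span the signal
            coeffs = centred @ basis
            return coeffs @ basis.T + mean         # reconstruction suppresses the noise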

  3. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    Science.gov (United States)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods of the world. High-quality rice production requires periodically collecting rice growth data in order to control the growth of the rice. The height of the plant, the number of stems and the color of the leaves are well-known parameters that indicate rice growth, and a rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey needs a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis has been proposed that is based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, with a vegetation cover rate calculation method that depends on automatic binarization alone, there is a possibility that the computed vegetation cover rate decreases even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed that is based on the automatic binarization process and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.

  4. Analysis of chemiluminescence measurements by grey-scale ICCD and colour digital cameras

    International Nuclear Information System (INIS)

    Migliorini, F; Maffi, S; De Iuliis, S; Zizak, G

    2014-01-01

    Spectral, grey-scale and colour chemiluminescence measurements of the C2* and CH* radicals' emission are carried out on the flame front of a methane–air premixed flame at different equivalence ratios. To this purpose, suitably spatially resolved optical equipment has been implemented in order to reduce the background emission from other burned gas regions. The grey-scale (ICCD + interference filters) and RGB colour (commercial digital camera) approaches have been compared in order to find a correspondence between C2* and the green component, as well as between CH* and the blue component of the emission intensities. The C2*/CH* chemiluminescence ratio has been investigated at different equivalence ratios and a good correlation has been obtained, showing the possibility of sensing the equivalence ratio in practical systems. The grey-scale and colour chemiluminescence analysis has then been applied to a meso-scale non-premixed swirl combustor fuelled with a methane–air mixture and operating at 0.3 MPa. 2D results are presented and discussed in this work. (paper)

  5. New long-zoom lens for 4K super 35mm digital cameras

    Science.gov (United States)

    Thorpe, Laurence J.; Usui, Fumiaki; Kamata, Ryuhei

    2015-05-01

    The world of television production is beginning to adopt 4K Super 35 mm (S35) image capture for a widening range of program genres that seek both the unique imaging properties of that large image format and the protection of their program assets in a world anticipating future 4K services. Documentary and natural history production in particular are transitioning to this form of production. The nature of their shooting demands long zoom lenses. In their traditional world of 2/3-inch digital HDTV cameras they have a broad choice in portable lenses - with zoom ranges as high as 40:1. In the world of Super 35mm the longest zoom lens is limited to 12:1 offering a telephoto of 400mm. Canon was requested to consider a significantly longer focal range lens while severely curtailing its size and weight. Extensive computer simulation explored countless combinations of optical and optomechanical systems in a quest to ensure that all operational requests and full 4K performance could be met. The final lens design is anticipated to have applications beyond entertainment production, including a variety of security systems.

  6. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Science.gov (United States)

    Pospisil, J.; Jakubik, P.; Machala, L.

    2005-11-01

    This article reports the development, realization and verification of a newly developed method for measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling and digitizing steps, which are influenced by additive and spatially discrete photodetector, aliasing and quantization noises. The method applies to the camera's automatic working regime and uses a static, two-dimensional, spatially continuous light-reflection random target with white-noise properties. The theoretical basis of the random-target method is developed using a simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the measurable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples of results, and the other results of the performed measurements, demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
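    The power-spectral-density relation stated above can be written compactly; the sketch below (not the authors' MATLAB programs, and without their smoothing and noise-correction steps) estimates the MTF as the square root of the output-to-input PSD ratio of one-dimensional luminance profiles, normalised near zero spatial frequency.

        import numpy as np

        def mtf_from_psd(target_profile, image_profile):
            """1-D luminance profiles of the white-noise target and of its recorded image."""
            psd_in = np.abs(np.fft.rfft(target_profile - target_profile.mean())) ** 2
            psd_out = np.abs(np.fft.rfft(image_profile - image_profile.mean())) ** 2
            mtf = np.sqrt(psd_out / np.maximum(psd_in, 1e-12))
            return mtf / max(mtf[1], 1e-12)   # normalise at the lowest non-zero frequency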

  7. Quantifying seasonal variation of leaf area index using near-infrared digital camera in a rice paddy

    Science.gov (United States)

    Hwang, Y.; Ryu, Y.; Kim, J.

    2017-12-01

    Digital cameras have been widely used to quantify leaf area index (LAI), and numerous simple and automatic methods have been proposed to improve digital-camera-based LAI estimates. However, most studies in rice paddies have relied on arbitrary thresholds or complex radiative transfer models to make binary images, and only a few studies have reported continuous, automatic observation of LAI over the season in a rice paddy. The objective of this study is to quantify seasonal variations of LAI using raw near-infrared (NIR) images coupled with a histogram-shape-based algorithm in a rice paddy. As vegetation strongly reflects NIR light, we installed a NIR digital camera 1.8 m above the ground surface and acquired unsaturated raw-format images at one-hour intervals at solar zenith angles between 15° and 80° over the entire growing season in 2016 (from May to September). We applied a sub-pixel classification combined with a light scattering correction method. Finally, to confirm the accuracy of the quantified LAI, we also conducted direct (destructive sampling) and indirect (LAI-2200) manual observations of LAI once per ten days on average. Preliminary results show that the NIR-derived LAI agreed well with in-situ observations, but divergence tended to appear once the rice canopy was fully developed. Continuous monitoring of LAI in rice paddies will help to better understand carbon and water fluxes and to evaluate satellite-based LAI products.
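    To make the binarisation-plus-inversion idea concrete, the sketch below uses Otsu's histogram-shape threshold and a simple Beer-Lambert gap-fraction inversion with an assumed extinction coefficient k; it stands in for, and is not identical to, the sub-pixel classification and light scattering correction used in the study.

        import numpy as np

        def otsu_threshold(values, nbins=256):
            """Histogram-shape-based threshold maximising the between-class variance."""
            hist, edges = np.histogram(values, bins=nbins)
            hist = hist.astype(float)
            centres = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(hist)                       # background weight up to each bin
            w1 = w0[-1] - w0                           # foreground weight above each bin
            cum = np.cumsum(hist * centres)
            m0 = cum / np.maximum(w0, 1e-12)
            m1 = (cum[-1] - cum) / np.maximum(w1, 1e-12)
            between = w0 * w1 * (m0 - m1) ** 2
            return centres[np.argmax(between)]

        def lai_from_nir(nir_image, k=0.5):
            """Threshold vegetation in a NIR image, then invert the gap fraction."""
            threshold = otsu_threshold(nir_image.ravel())
            gap_fraction = float(np.mean(nir_image <= threshold))   # darker pixels = background
            return -np.log(max(gap_fraction, 1e-6)) / k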

  8. Digital camera auto white balance based on color temperature estimation clustering

    Science.gov (United States)

    Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong

    2010-11-01

    Auto white balance (AWB) is an important technique for digital cameras. The human visual system has the ability to recognize the original color of an object in a scene illuminated by a light source whose color temperature differs from that of D65, the standard daylight. Recorded images or video clips, however, can only record the light actually incident on the sensor, so the recordings appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray-world assumption and white-point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions that are common in daily life, represented by their color temperatures, together with thresholds for each color temperature that determine whether a light source belongs to that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray-world assumption is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds in the given list, that block is discarded. Fourth, the remaining blocks are subjected to a majority selection, and the color temperature having the most blocks is considered the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources: the color casts are removed and the final images look natural.
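    The block-wise, majority-vote scheme described above translates almost directly into code; in the sketch below the illuminant list, the tolerance values and the mapping from grey-world R/B ratio to colour temperature are crude placeholders (the paper's calibrated values are not given in the abstract), so only the control flow mirrors the method.

        import numpy as np

        # Hypothetical list of common illuminants: (name, colour temperature in K, tolerance in K).
        ILLUMINANTS = [("incandescent", 2800, 300), ("fluorescent", 4000, 400),
                       ("daylight", 5500, 500), ("shade", 7000, 600)]

        def block_colour_temperature(block):
            """Very rough stand-in: map the grey-world R/B ratio of a block to a temperature."""
            r, b = block[..., 0].mean(), block[..., 2].mean()
            ratio = r / max(b, 1e-6)
            return 5500.0 / ratio          # ratio of 1 maps to 5500 K; warmer casts map lower

        def estimate_scene_illuminant(image, n_blocks=8):
            """image: (H, W, 3) array; returns the illuminant name with the most block votes."""
            h, w, _ = image.shape
            votes = {}
            for rows in np.array_split(np.arange(h), n_blocks):
                for cols in np.array_split(np.arange(w), n_blocks):
                    t = block_colour_temperature(image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1])
                    for name, temp, tol in ILLUMINANTS:
                        if abs(t - temp) <= tol:
                            votes[name] = votes.get(name, 0) + 1
                            break              # blocks outside every range are discarded
            return max(votes, key=votes.get) if votes else None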

  9. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2016-03-01

    Full Text Available High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture such rapid phenomena at high speed and high resolution. In this paper, we take the hardware restrictions of existing image sensors into account, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a gain in temporal resolution to 100 fps (frames per second) using a 25 fps camera.

  10. A pilot project combining multispectral proximal sensors and digital cameras for monitoring tropical pastures

    Science.gov (United States)

    Handcock, Rebecca N.; Gobbett, D. L.; González, Luciano A.; Bishop-Hurley, Greg J.; McGavin, Sharon L.

    2016-08-01

    Timely and accurate monitoring of pasture biomass and ground cover is necessary in livestock production systems to ensure productive and sustainable management. Interest in the use of proximal sensors for monitoring pasture status in grazing systems has increased, since data can be returned in near real time. Proximal sensors have the potential for deployment on large properties where remote sensing may not be suitable due to issues such as spatial scale or cloud cover. There are unresolved challenges in gathering reliable sensor data and in calibrating raw sensor data to values such as pasture biomass or vegetation ground cover, which allow meaningful interpretation of sensor data by livestock producers. Our goal was to assess whether a combination of proximal sensors could be reliably deployed to monitor tropical pasture status in an operational beef production system, as a precursor to designing a full sensor deployment. We use this pilot project to (1) illustrate practical issues around sensor deployment, (2) develop the methods necessary for the quality control of the sensor data, and (3) assess the strength of the relationships between vegetation indices derived from the proximal sensors and field observations across the wet and dry seasons. Proximal sensors were deployed at two sites in a tropical pasture on a beef production property near Townsville, Australia. Each site was monitored by a Skye SKR-four-band multispectral sensor (every 1 min), a digital camera (every 30 min), and a soil moisture sensor (every 1 min), each of which were operated over 18 months. Raw data from each sensor was processed to calculate multispectral vegetation indices. The data capture from the digital cameras was more reliable than the multispectral sensors, which had up to 67 % of data discarded after data cleaning and quality control for technical issues related to the sensor design, as well as environmental issues such as water incursion and insect infestations. We recommend

  11. Automatic segmentation of blood vessels from retinal fundus images ...

    Indian Academy of Sciences (India)

    The retinal blood vessels were segmented through color space conversion and color channel .... Retinal blood vessel segmentation was also attempted through multi-scale operators. A few works in this ... fundus camera at 35 degrees field of view. The image ... vessel segmentation is available from two human observers.

  12. Optimizing Low Light Level Imaging Techniques and Sensor Design Parameters using CCD Digital Cameras for Potential NASA Earth Science Research aboard a Small Satellite or ISS

    Data.gov (United States)

    National Aeronautics and Space Administration — For this project, the potential of using state-of-the-art aerial digital framing cameras that have time delayed integration (TDI) to acquire useful low light level...

  13. Photogrammetry and Remote Sensing: New German Standards (din) Setting Quality Requirements of Products Generated by Digital Cameras, Pan-Sharpening and Classification

    Science.gov (United States)

    Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.

    2012-08-01

    10 years after the first introduction of a digital airborne mapping camera at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development: the signal-to-noise ratio and the dynamic range are significantly better than with analogue cameras, and digital cameras can be spectrally and radiometrically calibrated. The use of these cameras nevertheless required a rethinking in many places, and new data products were introduced. In recent years, several activities took place that should lead to a better understanding of the cameras and the data they produce. Several projects, like those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) or EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards will be presented. These include the standard for digital cameras, the standard for ortho rectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar/SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development and has already published specifications in the field of photogrammetry and remote sensing. One goal of joint future work could be to merge these formerly independent developments and to jointly develop a suite of implementation specifications for photogrammetry and remote sensing.

  14. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    Science.gov (United States)

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
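    One standard way to realise the rigid surface-matching step mentioned above is the Kabsch algorithm on corresponding points; the sketch below is a generic illustration and may differ from the matching actually implemented in janus3D (which could, for instance, use an iterative closest point scheme on dense surfaces).

        import numpy as np

        def kabsch(source, target):
            """source, target: (N, 3) corresponding 3D points -> rotation R, translation t."""
            src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
            H = (source - src_c).T @ (target - tgt_c)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            D = np.diag([1.0, 1.0, d])          # guard against an improper rotation (reflection)
            R = Vt.T @ D @ U.T
            t = tgt_c - R @ src_c
            return R, t                          # aligned = (R @ source.T).T + t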

  15. Small Field of View Scintimammography Gamma Camera Integrated to a Stereotactic Core Biopsy Digital X-ray System

    Energy Technology Data Exchange (ETDEWEB)

    Andrew Weisenberger; Fernando Barbosa; T. D. Green; R. Hoefer; Cynthia Keppel; Brian Kross; Stanislaw Majewski; Vladimir Popov; Randolph Wojcik

    2002-10-01

    A small field of view gamma camera has been developed for integration with a commercial stereotactic core biopsy system. The goal is to develop and implement a dual-modality imaging system utilizing scintimammography and digital radiography to evaluate the reliability of scintimammography in predicting the malignancy of suspected breast lesions from conventional X-ray mammography. The scintimammography gamma camera is a custom-built mini gamma camera with an active area of 5.3 cm × 5.3 cm and is based on a 2 × 2 array of Hamamatsu R7600-C8 position-sensitive photomultiplier tubes. The spatial resolution of the gamma camera at the collimator surface is < 4 mm full-width at half-maximum, and the sensitivity is ~4000 Hz/mCi. The system is also capable of acquiring dynamic scintimammographic data to allow for dynamic uptake studies. Sample images of preliminary clinical results are presented to demonstrate the performance of the system.

  16. [Method for evaluating the positional accuracy of a six-degrees-of-freedom radiotherapy couch using high definition digital cameras].

    Science.gov (United States)

    Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori

    2011-01-01

    In this study, we proposed and evaluated a positional accuracy assessment method with two high-resolution digital cameras for add-on six-degrees-of-freedom radiotherapy (6D) couches. Two high resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle that was fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation of each axis. The coordinates of the needle in the pictures were obtained using manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.

  17. Designing for Diverse Classrooms: Using iPads and Digital Cameras to Compose eBooks with Emergent Bilingual/Biliterate Four-Year-Olds

    Science.gov (United States)

    Rowe, Deborah Wells; Miller, Mary E.

    2016-01-01

    This paper reports the findings of a two-year design study exploring instructional conditions supporting emerging, bilingual/biliterate, four-year-olds' digital composing. With adult support, children used child-friendly, digital cameras and iPads equipped with writing, drawing and bookmaking apps to compose multimodal, multilingual eBooks…

  18. Image analysis of ocular fundus for retinopathy characterization

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (DRIVE database) and in hundreds of images of non-macula-centric, nonuniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2), using deformable contours. Preliminary results show accurate segmentation of vessels and a high rate of true-positive microaneurysm detection.

  19. Flow visualization of bubble behavior under two-phase natural circulation flow conditions using high speed digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Wanderley F.; Su, Jian, E-mail: wlemos@con.ufrj.br, E-mail: sujian@lasme.coppe.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Faccini, Jose L.H., E-mail: faccini@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Termo-Hidraulica Experimental

    2013-07-01

    The present work aims at identifying flow patterns and measuring interfacial parameters in two-phase natural circulation by using a visualization technique with a high-speed digital camera. The experiments were conducted in the Natural Circulation Circuit (CCN), installed at the Nuclear Engineering Institute/CNEN. The thermo-hydraulic circuit comprises a heater, heat exchanger, expansion tank, pressure relief valve and interconnecting pipes. A glass tube is installed at the midpoint of the riser connected to the heater outlet. The natural circulation circuit is complemented by a system for acquiring temperature and flow values and a graphical interface. The instrumentation includes thermocouples, a volumetric flow meter, a rotameter and a high-speed digital camera. The experimental study is performed through analysis of temperature measurements at strategic points along the hydraulic circuit, as well as natural circulation flow rates. The comparisons between analytical and experimental values are validated by viewing, recording and processing the images of the flow patterns. Variables involved in the identification of flow regimes, dimensionless parameters, the phase velocity of the flow, the initial boiling point and the 'flashing' pre-slug flow phenomenon were obtained experimentally. (author)

  20. Flow visualization of bubble behavior under two-phase natural circulation flow conditions using high speed digital camera

    International Nuclear Information System (INIS)

    Lemos, Wanderley F.; Su, Jian; Faccini, Jose L.H.

    2013-01-01

    The present work aims at identifying flow patterns and measuring interfacial parameters in two-phase natural circulation by using a visualization technique with a high-speed digital camera. The experiments were conducted in the Natural Circulation Circuit (CCN), installed at the Nuclear Engineering Institute/CNEN. The thermo-hydraulic circuit comprises a heater, heat exchanger, expansion tank, pressure relief valve and interconnecting pipes. A glass tube is installed at the midpoint of the riser connected to the heater outlet. The natural circulation circuit is complemented by a system for acquiring temperature and flow values and a graphical interface. The instrumentation includes thermocouples, a volumetric flow meter, a rotameter and a high-speed digital camera. The experimental study is performed through analysis of temperature measurements at strategic points along the hydraulic circuit, as well as natural circulation flow rates. The comparisons between analytical and experimental values are validated by viewing, recording and processing the images of the flow patterns. Variables involved in the identification of flow regimes, dimensionless parameters, the phase velocity of the flow, the initial boiling point and the 'flashing' pre-slug flow phenomenon were obtained experimentally. (author)

  1. Quantitative evaluation of papilledema from stereoscopic color fundus photographs.

    Science.gov (United States)

    Tang, Li; Kardon, Randy H; Wang, Jui-Kai; Garvin, Mona K; Lee, Kyungmoo; Abràmoff, Michael D

    2012-07-03

    To derive a computerized measurement of optic disc volume from digital stereoscopic fundus photographs for the purpose of diagnosing and managing papilledema. Twenty-nine pairs of stereoscopic fundus photographs and optic nerve head (ONH) centered spectral domain optical coherence tomography (SD-OCT) scans were obtained at the same visit in 15 patients with papilledema. Some patients were imaged at multiple visits in order to assess their changes. The three-dimensional shape of the ONH was estimated from stereo fundus photographs using an automated multi-scale stereo correspondence algorithm. We assessed the correlation of the stereo volume measurements with the SD-OCT volume measurements quantitatively, in terms of the volume of retinal surface elevation above a reference plane, and also with expert grading of papilledema from digital fundus photographs using the Frisén grading scale. The volumetric measurements of retinal surface elevation estimated from stereo fundus photographs and from OCT scans were positively correlated (correlation coefficient r² = 0.60). The surface elevation volume estimated from stereo photographs compares favorably with that from OCT scans and with expert grading of papilledema severity. Stereoscopic color imaging of the ONH combined with a method of automated shape reconstruction is a low-cost alternative to SD-OCT scans that has potential for a more cost-effective diagnosis and management of papilledema in a telemedical setting. An automated three-dimensional image analysis method was validated that quantifies the retinal surface topography with an imaging modality that has lacked prior objective assessment.
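    The 'volume above a reference plane' measure used above reduces, once a surface elevation map is available, to integrating the positive elevation over the pixel grid; the sketch below is a generic illustration (the reference-plane definition and units are assumptions, not the paper's exact protocol).

        import numpy as np

        def elevation_volume_mm3(elevation_um, reference_um, pixel_area_mm2):
            """elevation_um: 2-D retinal surface height map in micrometres; returns mm^3."""
            excess_mm = np.clip(elevation_um - reference_um, 0.0, None) * 1e-3
            return float(excess_mm.sum() * pixel_area_mm2)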

  2. Trend of digital camera and interchangeable zoom lenses with high ratio based on patent application over the past 10 years

    Science.gov (United States)

    Sensui, Takayuki

    2012-10-01

    Although digitalization has tripled the scale of the consumer-class camera market, extreme reductions in the prices of fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e. digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. Designs in which performance degrades little under all types of errors are preferred, for a good balance of size, lens performance, and the yield of in-specification units. The decentering sensitivity of moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups, so the development of high-ratio zoom lenses with a vibration reduction mechanism is confronted by the challenge of reduced performance due to decentering, making control of the decentering sensitivity between lens groups essential. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial for making lenses smaller, but its decentering sensitivity is greater; a 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.

  3. [Fundus Autofluorescence Imaging].

    Science.gov (United States)

    Schmitz-Valckenberg, S

    2015-09-01

    Fundus autofluorescence (FAF) imaging allows for non-invasive mapping of changes at the level of the retinal pigment epithelium/photoreceptor complex and of alterations of macular pigment distribution. This imaging method is based on the visualisation of intrinsic fluorophores and may be easily and rapidly used in routine patient care. Main applications include degenerative disorders of the outer retina such as age-related macular degeneration, hereditary and acquired retinal diseases. FAF imaging is particularly helpful for differential diagnosis, detection and extent of involved retinal areas, structural-functional correlations and monitoring of changes over time. Recent developments include - in addition to the original application of short wavelength light for excitation ("blue" FAF imaging) - the use of other wavelength ranges ("green" or "near-infrared" FAF imaging), widefield imaging for visualisation of peripheral retinal areas and quantitative FAF imaging. Georg Thieme Verlag KG Stuttgart · New York.

  4. Fundus autofluorescence in serpiginouslike choroiditis.

    Science.gov (United States)

    Gupta, Amod; Bansal, Reema; Gupta, Vishali; Sharma, Aman

    2012-04-01

    To report the fundus autofluorescence characteristics in serpiginouslike choroiditis. Twenty-nine patients with presumed tubercular serpiginouslike choroiditis between November 2008 and January 2010 underwent fundus autofluorescence imaging during the acute stage and at regular intervals till the lesions healed. All patients received antitubercular therapy with oral corticosteroids. The autofluorescence images were compared with color fundus photography and fundus fluorescein angiography. The main outcome measure was fundus autofluorescence characteristics of lesions during the course of the disease. The pattern of fundus autofluorescence changed as the lesions evolved from the acute to the healed stage. In acute stage, the lesions showed an ill-defined halo of increased autofluorescence (hyperautofluorescence), giving it a diffuse, amorphous appearance (Stage I, acute). As the lesions began to heal, a thin rim of decreased autofluorescence (hypoautofluorescence) surrounded the lesion, defining its edges. The lesions showed predominantly hyperautofluorescence with stippled pattern (Stage II, subacute). With further healing, the hypoautofluorescence progressed and the lesion appeared predominantly hypoautofluorescent with stippled pattern (Stage III, nearly resolved). On complete healing, the lesions became uniformly hypoautofluorescent (Stage IV, completely resolved). Fundus autofluorescence highlighted the areas of disease activity and was a quick imaging tool for monitoring the course of lesions in serpiginouslike choroiditis.

  5. The analytic report of China digital camera market price trend Oct, 2004%2004年10月中国数码相机市场价格走势分析报告

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    This paper analyses the trend of the overall market price index and the price trends of mainstream-pixel products in the 512-grade consumer digital camera and professional digital camera segments under our supervision. The trends are illustrated with detailed graphs. We believe the paper offers enterprises insight into the digital camera market, helping them better understand market direction and refine their market strategies.

  6. Comparison of optical coherence tomography and fundus photography for measuring the optic disc size.

    Science.gov (United States)

    Neubauer, Aljoscha S; Krieglstein, Tina R; Chryssafis, Christos; Thiel, Martin; Kampik, Anselm

    2006-01-01

    To assess the agreement and repeatability of optic nerve head (ONH) size measurements by optical coherence tomography (OCT) as compared to conventional planimetry of fundus photographs in normal eyes. For comparison with planimetry, the absolute size of the ONH of 25 eyes from 25 normal subjects was measured by both OCT and digital fundus photography (Zeiss FF 450 camera). Repeatability of automated Stratus OCT measurements was investigated by repeatedly measuring the optic disc in five normal subjects. Mean disc size was 1763 +/- 186 microm vertically and 1632 +/- 160 microm horizontally on planimetry. On OCT, values of 1772 +/- 317 microm vertically (p = 0.82) and a significantly smaller horizontal diameter of 1492 +/- 302 microm (p = 0.04) were obtained. The 95% limits of agreement were (-546 microm; +527 microm) for vertical and (-502 microm; +782 microm) for horizontal planimetric compared to OCT measurements. In some cases large discrepancies existed. Repeatability of automatic measurements of the optic disc by OCT was moderately good with intra-class correlation coefficients (ICC) of 0.78 horizontally and 0.83 vertically. The coefficient of repeatability indicating instrument precision was 80 microm for horizontal and 168 microm for vertical measurements. OCT can be used to determine optic disc margins in moderate agreement with planimetry in normal subjects. However, in some cases significant disagreement with photographic assessment may occur, making manual inspection advisable. Automatic disc detection by OCT is moderately repeatable.

  7. AnimalCatcher: a digital camera to capture various reactions of animals

    OpenAIRE

    Tsukada, Koji; Oki, Maho; Kurihara, Kazutaka; Furudate, Yuko

    2015-01-01

    People often have difficulty taking pictures of animals, since animals usually do not react to cameras or understand verbal directions. To solve this problem, we developed a new interaction technique, AnimalCatcher, which can easily attract animals' attention. AnimalCatcher emits various sounds through a directional speaker to capture various reactions of animals. This paper describes the concept, the implementation, and example pictures taken in a zoo.

  8. The use of a sky camera for solar radiation estimation based on digital image processing

    International Nuclear Information System (INIS)

    Alonso-Montesinos, J.; Batlles, F.J.

    2015-01-01

    The necessary search for a more sustainable global future means using renewable energy sources to generate pollutant-free electricity. CSP (Concentrated solar power) and PV (photovoltaic) plants are the systems most in demand for electricity production using solar radiation as the energy source. The main factors affecting final electricity generation in these plants are, among others, atmospheric conditions; therefore, knowing whether there will be any change in the solar radiation hitting the plant's solar field is of fundamental importance to CSP and PV plant operators in adapting the plant's operation mode to these fluctuations. Consequently, the most useful technology must involve the study of atmospheric conditions. This is the case for sky cameras, an emerging technology that allows one to gather sky information with optimal spatial and temporal resolution. Hence, in this work, a solar radiation estimation using sky camera images is presented for all sky conditions, where beam, diffuse and global solar radiation components are estimated in real-time as a novel way to evaluate the solar resource from a terrestrial viewpoint. - Highlights: • Using a sky camera, the solar resource has been estimated for one minute periods. • The sky images have been processed to estimate the solar radiation at pixel level. • The three radiation components have been estimated under all sky conditions. • Results have been presented for cloudless, partially-cloudy and overcast conditions. • For beam and global radiation, the nRMSE value is of about 11% under overcast skies.

  9. Shoaling behaviour of Lates japonicus revealed through a digital camera logger

    Directory of Open Access Journals (Sweden)

    Sara Gonzalvo

    2015-01-01

    Full Text Available Protecting endangered species is one of the main targets of conservation biology, but the study of these species is often a sensitive issue. The need to risk, and often take, the life of some specimens during the experiments is not easily justified. Technological advances provide scientists with tools that can reduce damage to studied species, while increasing the quality of the data obtained. Here, we analyse the social behaviour of an endangered Japanese fish, Akame (Lates japonicus), using an attached underwater camera. Social behaviour, especially concerning aggregations, is a key factor in conservation plans and fisheries management to avoid by-catch and to establish coherent protected areas. In this experiment, a fish-borne underwater still-camera logger was attached to a captured Akame, recording the individual in its natural environment in July 2009. The images obtained from the camera revealed several groups of large adults moving together, showing for the first time in this species an aggregative behaviour. This discovery opens the door for initiation of protective measures to preserve these groups, which, in turn, can help to ensure continuity of this fish in the Shimanto River by protecting the specific areas where these shoals gather.

  10. Fundus autofluorescence and colour fundus imaging compared during telemedicine screening in patients with diabetes.

    Science.gov (United States)

    Kolomeyer, Anton M; Baumrind, Benjamin R; Szirth, Bernard C; Shahid, Khadija; Khouri, Albert S

    2013-06-01

    We investigated the use of fundus autofluorescence (FAF) imaging in screening the eyes of patients with diabetes. Images were obtained from 50 patients with type 2 diabetes undergoing telemedicine screening with colour fundus imaging. The colour and FAF images were obtained with a 15.1 megapixel non-mydriatic retinal camera. Colour and FAF images were compared for pathology seen in nonproliferative and proliferative diabetic retinopathy (NPDR and PDR, respectively). A qualitative assessment was made of the ease of detecting early retinopathy changes and the extent of existing retinopathy. The mean age of the patients was 47 years, most were male (82%) and most were African American (68%). Their mean visual acuity was 20/45 and their mean intraocular pressure was 14.3 mm Hg. Thirty-eight eyes (76%) did not show any diabetic retinopathy changes on colour or FAF imaging. Seven patients (14%) met the criteria for NPDR and five (10%) for severe NPDR or PDR. The most common findings were microaneurysms, hard exudates and intra-retinal haemorrhages (IRH) (n = 6 for each). IRH, microaneurysms and chorioretinal scars were more easily visible on FAF images. Hard exudates, pre-retinal haemorrhage and fibrosis, macular oedema and Hollenhorst plaque were easier to identify on colour photographs. The value of FAF imaging as a complementary technique to colour fundus imaging in detecting diabetic retinopathy during ocular screening warrants further investigation.

  11. Adaptive optics fundus images of cone photoreceptors in the macula of patients with retinitis pigmentosa

    Directory of Open Access Journals (Sweden)

    Tojo N

    2013-01-01

    Full Text Available Naoki Tojo, Tomoko Nakamura, Chiharu Fuchizawa, Toshihiko Oiwake, Atsushi Hayashi; Department of Ophthalmology, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan. Background: The purpose of this study was to examine cone photoreceptors in the macula of patients with retinitis pigmentosa using an adaptive optics fundus camera and to investigate any correlations between cone photoreceptor density and findings on optical coherence tomography and fundus autofluorescence. Methods: We examined two patients with typical retinitis pigmentosa who underwent ophthalmological examination, including measurement of visual acuity, and gathering of electroretinographic, optical coherence tomographic, fundus autofluorescent, and adaptive optics fundus images. The cone photoreceptors in the adaptive optics images of the two patients with retinitis pigmentosa and five healthy subjects were analyzed. Results: An abnormal parafoveal ring of high-density fundus autofluorescence was observed in the macula in both patients. The border of the ring corresponded to the border of the external limiting membrane and the inner segment and outer segment line in the optical coherence tomographic images. Cone photoreceptors at the abnormal parafoveal ring were blurred and decreased in the adaptive optics images. The blurred area corresponded to the abnormal parafoveal ring in the fundus autofluorescence images. Cone densities were low at the blurred areas and at the nasal and temporal retina along a line from the fovea compared with those of healthy controls. The results for cone spacing and Voronoi domains in the macula corresponded with those for the cone densities. Conclusion: Cone densities were heavily decreased in the macula, especially at the parafoveal ring on high-density fundus autofluorescence in both patients with retinitis pigmentosa. Adaptive optics images enabled us to observe in vivo changes in the cone photoreceptors of patients with retinitis pigmentosa, which corresponded to changes in the optical coherence tomographic and fundus autofluorescence images.

  12. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
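
    The registration-and-stacking pipeline described above (feature detection in the first frame, homologous points in later frames, estimation of the geometrical transformation, resampling, accumulation) can be sketched with OpenCV. The sketch below substitutes generic pyramidal Lucas-Kanade tracking for the paper's IMU-aided template matching, uses a homography as the geometric model, and assumes single-channel 8-bit frames; all names and thresholds are illustrative assumptions, not the authors' implementation:

      import cv2
      import numpy as np

      def stack_short_exposures(frames):
          """Register frames 2..N onto frame 1 and average them (illustrative only)."""
          ref = frames[0]
          fast = cv2.FastFeatureDetector_create(threshold=40)
          pts = np.float32([kp.pt for kp in fast.detect(ref, None)]).reshape(-1, 1, 2)

          acc = ref.astype(np.float64)
          for img in frames[1:]:
              # Track the reference feature points into the current frame
              # (stands in for the IMU-aided template matching of the paper).
              nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref, img, pts, None)
              good = status.ravel() == 1
              # Estimate the transform between frames, then resample onto the reference
              H, _ = cv2.findHomography(nxt[good], pts[good], cv2.RANSAC, 3.0)
              acc += cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0])).astype(np.float64)

          return (acc / len(frames)).astype(np.uint8)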

  13. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Ahmad Audi

    2017-07-01

    Full Text Available Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  14. IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS

    Directory of Open Access Journals (Sweden)

    A. Audi

    2017-08-01

    Full Text Available In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by the erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by the feature-based image registration technique. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points on other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm such as feature detection and image resampling in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images, as well as block diagrams of the described architecture. The resulting stacked image obtained on real surveys doesn’t seem visually impaired. Timing results demonstrate that our algorithm can be used in real-time since its processing time is less than the writing time of an image in the storage device. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate the gyrometers of the IMU in real time.

  15. Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.

    Science.gov (United States)

    Smyth, Rachael E; Oram Cardy, Janis; Purcell, David

    2017-06-01

    This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer grade digital camera. A visual inspection time task was recorded using short high-speed video clips and the timing as reported by the task's program was compared to the timing as recorded in the video clips. Discrepancies in these two timing reports were investigated further and based on display refresh rate, a decision was made whether the discrepancy was large enough to affect the results as reported by the task. In this particular study, the errors in timing were not large enough to impact the results of the study. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing in any software program on any operating system and display.
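
    The check described above amounts to counting the video frames spanned by a displayed stimulus and comparing the implied duration against what the task program reported. A minimal sketch of that comparison, assuming the onset and offset frame indices have already been read off the high-speed video (all names hypothetical):

      def measured_duration_ms(onset_frame, offset_frame, fps=480.0):
          """Duration implied by the video evidence, in milliseconds."""
          return (offset_frame - onset_frame) * 1000.0 / fps

      def timing_error_ms(reported_ms, onset_frame, offset_frame, fps=480.0):
          """Discrepancy between the task's reported timing and the video."""
          return reported_ms - measured_duration_ms(onset_frame, offset_frame, fps)

      # e.g. a stimulus reported as 50 ms that spans 26 frames at 480 fps
      # -> measured ~54.2 ms, error ~ -4.2 ms, to be judged against the display refresh period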

  16. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    Science.gov (United States)

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ??? 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ??? 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ???96% accuracy, which is more than
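
    The autocorrelation method rests on the fact that the spatial autocorrelation of a sediment image decays with pixel offset more quickly for fine grains than for coarse grains; grain size is then read off by matching the measured curve against calibration curves from images of sieved sediment of known size. The following is a hedged sketch of the curve computation only, illustrating the idea rather than reproducing the published implementation:

      import numpy as np

      def autocorrelation_curve(image, max_offset=30):
          """Normalized spatial autocorrelation of a grayscale sediment image
          as a function of pixel offset (averaged over the two image axes)."""
          img = image.astype(np.float64)
          img -= img.mean()
          curve = []
          for k in range(1, max_offset + 1):
              a, b = img[:, :-k], img[:, k:]          # horizontal shift by k pixels
              cx = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
              a, b = img[:-k, :], img[k:, :]          # vertical shift by k pixels
              cy = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
              curve.append(0.5 * (cx + cy))
          return np.array(curve)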

  17. POINT CLOUD DERIVED FROM VIDEO FRAMES: ACCURACY ASSESSMENT IN RELATION TO TERRESTRIAL LASER SCANNING AND DIGITAL CAMERA DATA

    Directory of Open Access Journals (Sweden)

    P. Delis

    2017-02-01

    Full Text Available The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using the SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired using a high resolution camera, the NIKON D800. The performed research has shown that the highest accuracies are obtained for point clouds generated from video frames for which high-pass filtration and histogram equalization had been performed. Studies have shown that to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85 % or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
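
    The per-frame preparation described above (histogram equalization and sharpening before dense matching, plus selecting frames with sufficient overlap) might look roughly like the following OpenCV sketch; the sharpening weights and the frame-selection step are illustrative assumptions, not values from the paper:

      import cv2

      def preprocess_frame(gray):
          """Histogram equalization followed by a mild unsharp mask, applied to
          each selected grayscale video frame before image matching."""
          eq = cv2.equalizeHist(gray)
          blur = cv2.GaussianBlur(eq, (0, 0), sigmaX=2.0)
          return cv2.addWeighted(eq, 1.5, blur, -0.5, 0)   # simple image sharpening

      def select_frames(n_frames, step):
          """Keep every `step`-th frame; in practice `step` would come from an
          overlap criterion (>= 85 % overlap between consecutive selected frames)."""
          return list(range(0, n_frames, step))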

  18. EDUCATING THE PEOPLE AS A DIGITAL PHOTOGRAPHER AND CAMERA OPERATOR VIA OPEN EDUCATION SYSTEM STUDIES FROM TURKEY: Anadolu University Open Education Faculty Case

    Directory of Open Access Journals (Sweden)

    Huseyin ERYILMAZ

    2010-04-01

    Full Text Available Today, photography and the visual arts are very important in modern life, especially for mass communication, where visual images carry great weight. In modern societies, people need knowledge about visual material such as photographs, cartoons, drawings and typography; in short, people need education in visual literacy. Most people today own a digital camera for still photography or video, but it is not possible to provide visual literacy education to everyone through the classic school system. Camera users therefore need a teaching medium for using their cameras effectively, and many turn to the internet, using websites and pages as an information source. As is well known, however, not all websites provide correct learning material or know-how. For these reasons, Anadolu University Open Education Faculty started a new education programme in 2009 to train people as digital photographers and camera operators, and this programme is important as a case study. The language of photography and digital technology is largely English, and not all camera users understand English. Thanks to this programme, many camera users, and especially people working as studio operators, will learn a great deal about photography, digital technology and camera systems, as well as about composition, the history of the visual image, and related topics. For these reasons, the programme is particularly important for developing countries. This paper discusses this subject.

  19. Digital camera image analysis of faeces in detection of cholestatic jaundice in infants

    OpenAIRE

    Parinya Parinyanut; Tai Bandisak; Piyawan Chiengkriwate; Sawit Tanthanuch; Surasak Sangkhathat

    2016-01-01

    Background: Stool colour assessment is a screening method for biliary tract obstruction in infants. This study was intended as a proof of concept of digital photographic image analysis of stool colour, compared with colour grading by a colour card and with the stool bilirubin level test. Materials and Methods: The total bilirubin (TB) content of stool samples from 17 infants aged less than 1 year, seven with confirmed cholestatic jaundice and ten healthy subjects, was measured, and outcome c...

  20. Noncontact imaging of plethysmographic pulsation and spontaneous low-frequency oscillation in skin perfusion with a digital red-green-blue camera

    Science.gov (United States)

    Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa

    2016-03-01

    A non-contact imaging method with a digital RGB camera is proposed to evaluate the plethysmogram and spontaneous low-frequency oscillation. In vivo experiments on human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.

  1. Software development and its description for Geoid determination based on Spherical-Cap-Harmonics Modelling using digital-zenith camera and gravimetric measurements hybrid data

    Science.gov (United States)

    Morozova, K.; Jaeger, R.; Balodis, J.; Kaminskis, J.

    2017-10-01

    Over several years the Institute of Geodesy and Geoinformatics (GGI) has been engaged in the design and development of a digital zenith camera. The camera development is now finished and tests by field measurements have been done. To check these data and to use them for geoid model determination, the DFHRS (Digital Finite-element Height Reference Surface, HRS) v4.3 software is used. It is based on parametric modelling of the HRS as a continuous polynomial surface. The HRS, providing the local geoid height N, is a necessary geodetic infrastructure for a GNSS-based determination of physical heights H from ellipsoidal GNSS heights h, by H = h - N. This research and publication deal with the inclusion of the data of observed vertical deflections from the digital zenith camera into the mathematical model of the DFHRS approach and software v4.3. A first target was to test and validate the mathematical model and software, additionally using real data from the above-mentioned zenith camera observations of deflections of the vertical. A second concern of the research was to analyse the results and the improvement of the Latvian quasi-geoid computation compared with the previous version of the HRS, computed without zenith-camera-based deflections of the vertical. The further development of the mathematical model and software concerns the use of spherical cap harmonics as the carrier function for DFHRS v.5. It enables - in the sense of the strict integrated geodesy approach, holding also for geodetic network adjustment - both a full gravity field and a geoid and quasi-geoid determination. In addition, it allows the inclusion of gravimetric measurements, together with deflections of the vertical from digital zenith cameras, and all other types of observations. The theoretical description of the updated version of the DFHRS software and methods is discussed in this publication.

  2. Research on Deep Joints and Lode Extension Based on Digital Borehole Camera Technology

    Directory of Open Access Journals (Sweden)

    Han Zengqiang

    2015-09-01

    Full Text Available The structural characteristics of rock and orebody in deep boreholes are obtained by borehole camera technology. By investigating the joints and fissures in the Shapinggou molybdenum mine, the dominant orientations of joint fissures in the surrounding rock and the orebody were statistically analyzed. Applying the theory of metallogeny and geostatistics, the relationship between joint fissures and the lode's extension direction is explored. The results indicate that joints in the orebody of the ZK61 borehole have only one dominant orientation, SE126° ∠68°, whereas the dominant orientations of joints in the surrounding rock were SE118° ∠73°, SW225° ∠70° and SE122° ∠65°, NE79° ∠63°. A preliminary conclusion is that the lode's extension direction is specific and is influenced by the joints of the surrounding rock. Results from other boreholes generally agree well with those of ZK61, suggesting that the analysis reliably reflects the lode's extension properties and provides important references for deep ore prospecting.

  3. Determining degree of optic nerve edema from color fundus photography

    Science.gov (United States)

    Agne, Jason; Wang, Jui-Kai; Kardon, Randy H.; Garvin, Mona K.

    2015-03-01

    Swelling of the optic nerve head (ONH) is subjectively assessed by clinicians using the Frisén scale. It is believed that a direct measurement of the ONH volume would serve as a better representation of the swelling. However, a direct measurement requires optic nerve imaging with spectral domain optical coherence tomography (SD-OCT) and 3D segmentation of the resulting images, which is not always available during clinical evaluation. Furthermore, telemedical imaging of the eye at remote locations is more feasible with non-mydriatic fundus cameras which are less costly than OCT imagers. Therefore, there is a critical need to develop a more quantitative analysis of optic nerve swelling on a continuous scale, similar to SD-OCT. Here, we select features from more commonly available 2D fundus images and use them to predict ONH volume. Twenty-six features were extracted from each of 48 color fundus images. The features include attributes of the blood vessels, optic nerve head, and peripapillary retina areas. These features were used in a regression analysis to predict ONH volume, as computed by a segmentation of the SD-OCT image. The results of the regression analysis yielded a mean square error of 2.43 mm3 and a correlation coefficient between computed and predicted volumes of R = 0.771, which suggests that ONH volume may be predicted from fundus features alone.
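
    The prediction step described above is a regression from 26 image-derived features to the OCT-derived ONH volume over 48 images. A hedged sketch of such an evaluation using scikit-learn follows; the paper does not state the regression model or cross-validation scheme, so ordinary least squares with leave-one-out prediction is an assumption made here for illustration:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      def evaluate_volume_regression(X, y):
          """X: (n_images, 26) feature matrix; y: OCT-derived ONH volumes (mm^3).
          Returns mean squared error and Pearson correlation of leave-one-out
          predictions against the measured volumes (illustrative only)."""
          model = LinearRegression()
          y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
          mse = np.mean((y - y_pred) ** 2)
          r = np.corrcoef(y, y_pred)[0, 1]
          return mse, r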

  4. Quality assurance in proton beam therapy using a plastic scintillator and a commercially available digital camera.

    Science.gov (United States)

    Almurayshid, Mansour; Helo, Yusuf; Kacperek, Andrzej; Griffiths, Jennifer; Hebden, Jem; Gibson, Adam

    2017-09-01

    In this article, we evaluate a plastic scintillation detector system for quality assurance in proton therapy using a BC-408 plastic scintillator, a commercial camera, and a computer. The basic characteristics of the system were assessed in a series of proton irradiations. The reproducibility and response to changes of dose, dose-rate, and proton energy were determined. Photographs of the scintillation light distributions were acquired, and compared with Geant4 Monte Carlo simulations and with depth-dose curves measured with an ionization chamber. A quenching effect was observed at the Bragg peak of the 60 MeV proton beam where less light was produced than expected. We developed an approach using Birks equation to correct for this quenching. We simulated the linear energy transfer (LET) as a function of depth in Geant4 and found Birks constant by comparing the calculated LET and measured scintillation light distribution. We then used the derived value of Birks constant to correct the measured scintillation light distribution for quenching using Geant4. The corrected light output from the scintillator increased linearly with dose. The system is stable and offers short-term reproducibility to within 0.80%. No dose rate dependency was observed in this work. This approach offers an effective way to correct for quenching, and could provide a method for rapid, convenient, routine quality assurance for clinical proton beams. Furthermore, the system has the advantage of providing 2D visualization of individual radiation fields, with potential application for quality assurance of complex, time-varying fields. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
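
    The quenching correction follows Birks' law, dL/dx proportional to (dE/dx)/(1 + kB*dE/dx), so once Birks' constant has been fitted from the simulated LET, multiplying the measured light by (1 + kB*LET) restores a dose-proportional depth profile. A minimal sketch of that correction step (variable names and units are illustrative, and the fitting of kB itself is not shown):

      import numpy as np

      def birks_corrected_dose_profile(light, let, kB):
          """Correct a measured scintillation light depth profile for quenching.

          light : measured light output vs depth (arbitrary units)
          let   : simulated linear energy transfer vs depth (e.g. keV/um), same grid
          kB    : Birks constant in matching units (um/keV)
          """
          light = np.asarray(light, dtype=float)
          let = np.asarray(let, dtype=float)
          # Birks' law: dL/dx ~ (dE/dx) / (1 + kB*dE/dx), so multiplying the
          # measured light by (1 + kB*LET) recovers a dose-proportional profile.
          return light * (1.0 + kB * let)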

  5. Noninvasive imaging of human skin hemodynamics using a digital red-green-blue camera

    Science.gov (United States)

    Nishidate, Izumi; Tanaka, Noriyuki; Kawase, Tatsuya; Maeda, Takaaki; Yuasa, Tomonori; Aizu, Yoshihisa; Yuasa, Tetsuya; Niizeki, Kyuichi

    2011-08-01

    In order to visualize human skin hemodynamics, we investigated a method that is specifically developed for the visualization of concentrations of oxygenated blood, deoxygenated blood, and melanin in skin tissue from digital RGB color images. Images of total blood concentration and oxygen saturation can also be reconstructed from the results of oxygenated and deoxygenated blood. Experiments using tissue-like agar gel phantoms demonstrated the ability of the developed method to quantitatively visualize the transition from an oxygenated blood to a deoxygenated blood in dermis. In vivo imaging of the chromophore concentrations and tissue oxygen saturation in the skin of the human hand are performed for 14 subjects during upper limb occlusion at 50 and 250 mm Hg. The response of the total blood concentration in the skin acquired by this method and forearm volume changes obtained from the conventional strain-gauge plethysmograph were comparable during the upper arm occlusion at pressures of both 50 and 250 mm Hg. The results presented in the present paper indicate the possibility of visualizing the hemodynamics of subsurface skin tissue.
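
    The reconstruction described above maps per-pixel RGB values to chromophore concentrations; in the paper this mapping is built from simulations of light transport in skin, which are not reproduced here. The sketch below shows only the generic linear-unmixing shape of such a pipeline, with a placeholder conversion matrix and a modified Beer-Lambert absorbance step; every value and name is hypothetical, not the authors' calibration:

      import numpy as np

      # Hypothetical 3x3 matrix mapping per-channel absorbance to chromophore
      # concentrations (oxy-Hb, deoxy-Hb, melanin); placeholder values only.
      A_INV = np.eye(3)

      def chromophore_maps(rgb, rgb_reference, a_inv=A_INV):
          """Per-pixel chromophore estimates from an RGB image (modified
          Beer-Lambert sketch; rgb_reference is a chromophore-free reference)."""
          absorbance = -np.log((rgb.astype(float) + 1.0) /
                               (rgb_reference.astype(float) + 1.0))   # +1 avoids log(0)
          h, w, _ = absorbance.shape
          conc = absorbance.reshape(-1, 3) @ a_inv.T                  # linear unmixing per pixel
          oxy, deoxy, melanin = conc.reshape(h, w, 3).transpose(2, 0, 1)
          total_blood = oxy + deoxy
          so2 = np.divide(oxy, total_blood, out=np.zeros_like(oxy), where=total_blood > 0)
          return oxy, deoxy, melanin, total_blood, so2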

  6. Opinion and Special Articles: Amateur fundus photography with various new devices: Our experience as neurology residents.

    Science.gov (United States)

    Zafar, Saman; Cardenas, Ylec Mariana; Leishangthem, Lakshmi; Yaddanapudi, Sridhara

    2018-05-08

    Times are changing in the way we secure and share patient fundus photographs to enhance our diagnostic skills in neurology. At the recent American Academy of Neurology meeting, the use of a fundus camera and smartphones to secure good-quality fundus photographs of patients presenting with headache to the emergency department (ED) was presented. We were enthusiastic to replicate the success of the Fundus Photography vs Ophthalmoscopy Trial Outcomes in the Emergency Department (FOTO-ED) study in our neurology department, but encountered problems in terms of cost, setup, feasibility, and portability of the device. As neurology residents, we came up with 3 easier options. We present these 3 options as our personal experience, and hope to reignite enthusiasm among neurology trainees to find their own means of performing ophthalmoscopy routinely in the hospital, as it appears that the Internet market is now thriving with many other devices to make this examination easier and more rewarding. Of the options explored above, the Handheld Fundus Camera was a clear favorite among the residents, and we have placed one in our call room for routine use. It travels to the clinic, floor, intensive care unit, and ED when needed. It has enhanced the way we approach the fundus examination and been a fun skill to acquire. We look forward to further advances that will make it possible to carry such a device in a physician's pocket. © 2018 American Academy of Neurology.

  7. Thermal diffusivity measurement in thin metallic filaments using the mirage method with multiple probe beams and a digital camera

    Science.gov (United States)

    Vargas, E.; Cifuentes, A.; Alvarado, S.; Cabrera, H.; Delgado, O.; Calderón, A.; Marín, E.

    2018-02-01

    Photothermal beam deflection is a well-established technique for measuring thermal diffusivity. In this technique, a pump laser beam generates temperature variations on the surface of the sample to be studied. These variations transfer heat to the surrounding medium, which may be air or any other fluid. The medium in turn experiences a change in refractive index, which is proportional to the temperature field on the sample surface when the distance to this surface is small. A probe laser beam is deflected by these periodic refractive index changes, and the deflection is usually monitored by means of a quadrant photodetector or a similar device aided by lock-in amplification. A linear relationship that arises in this technique is that between the phase lag of the thermal wave and the distance to a point-like heat source when one-dimensional heat diffusion can be guaranteed. This relationship is useful for calculating the sample's thermal diffusivity, which can be obtained straightforwardly by the so-called slope method if the pump beam modulation frequency is known. The measurement procedure requires the experimenter to displace the probe beam to a given distance from the heat source, measure the phase lag at that offset, and repeat this for as many points as desired. This process can be quite lengthy, depending on the number of points. In this paper, we propose a detection scheme that overcomes this limitation and simplifies the experimental setup: a digital camera replaces all of the detection hardware, using motion-detection techniques and software digital lock-in post-processing. In this work, the method is demonstrated using thin metallic filaments as samples.
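
    For one-dimensional heat diffusion, the thermal-wave phase lag grows linearly with distance from the heat source, with slope sqrt(pi*f/alpha), so the slope method recovers the thermal diffusivity from a straight-line fit of phase versus probe-beam offset. A minimal sketch of that final fitting step (the camera-based phase extraction itself is not shown, and names are illustrative):

      import numpy as np

      def thermal_diffusivity_slope_method(offsets_m, phase_rad, frequency_hz):
          """Fit phase lag vs distance from the heat source; the slope equals
          sqrt(pi*f/alpha), so alpha = pi*f / slope**2 (m^2/s)."""
          slope, _ = np.polyfit(offsets_m, phase_rad, 1)
          return np.pi * frequency_hz / slope ** 2

      # e.g. phase data sampled along the filament at a modulation frequency of 4 Hz:
      # alpha = thermal_diffusivity_slope_method(x, phi, 4.0)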

  8. Infrared-laser-based fundus angiography

    Science.gov (United States)

    Klingbeil, Ulrich; Canter, Joseph M.; Lesiecki, Michael L.; Reichel, Elias

    1994-06-01

    Infrared fundus angiography, using the fluorescent dye indocyanine green (ICG), has shown great potential in delineating choroidal neovascularization (CNV) otherwise not detectable. A digital retinal imaging system containing a diode laser for illumination has been developed and optimized to perform high sensitivity ICG angiography. The system requires less power and generates less pseudo-fluorescence background than nonlaser devices. During clinical evaluation at three retinal centers more than 200 patients, the majority of which had age-related macular degeneration, were analyzed. Laser based ICG angiography was successful in outlining many of the ill-defined or obscure CNV as defined by fluorescein angiography. The procedure was not as successful with classic CNV. ICG angiograms were used to prepare and guide laser treatment.

  9. A Design of Real-time Automatic Focusing System for Digital Still Camera Using the Passive Sensor Error Minimization

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.S. [Samsung Techwin Co., Ltd., Seoul (Korea); Kim, D.Y. [Bucheon College, Bucheon (Korea); Kim, S.H. [University of Seoul, Seoul (Korea)

    2002-05-01

    In this paper, the implementation of a new AF (Automatic Focusing) system for a digital still camera is introduced. The proposed system operates in real time, adjusting focus after measuring the distance to an object with a passive sensor, which differs from the typical method. In addition, measurement errors were minimized by using empirically acquired data, and the optimal measuring time was obtained using the EV (Exposure Value) calculated from the CCD luminance signal. Moreover, the system adopted an auxiliary light source for focusing in completely dark conditions, which are very hard for CCD image processing. Since this is an open-loop system that adjusts focus immediately after the distance measurement, it guarantees real-time operation. The performance of the new AF system was verified by comparing the focusing value curve obtained from the AF experiment with the one measured by MF (Manual Focusing). In both cases, an edge detector was used for various objects and backgrounds. (author). 9 refs., 11 figs., 5 tabs.

  10. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
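
    The channel-separation step described above can be modelled as inverting a small crosstalk matrix measured by illuminating each optical path alone. The sketch below shows that inversion for the red and blue channels; the matrix coefficients are placeholders, and the paper's actual correction procedure may differ in detail:

      import numpy as np

      def correct_crosstalk(red_raw, blue_raw, m_rr, m_rb, m_br, m_bb):
          """Separate the two optical paths from a colour image given a 2x2
          crosstalk matrix (measured with each path illuminated alone)."""
          M = np.array([[m_rr, m_rb],
                        [m_br, m_bb]], dtype=float)
          Minv = np.linalg.inv(M)
          stacked = np.stack([red_raw, blue_raw]).reshape(2, -1).astype(float)
          red_true, blue_true = (Minv @ stacked).reshape(2, *red_raw.shape)
          return red_true, blue_true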

  11. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    Science.gov (United States)

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in rapid on-site detection. For instance, CMOS chips in smartphones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments, we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  12. Ocular Fundus Photography as an Educational Tool.

    Science.gov (United States)

    Mackay, Devin D; Garza, Philip S

    2015-10-01

    The proficiency of nonophthalmologists with direct ophthalmoscopy is poor, which has prompted a search for alternative technologies to examine the ocular fundus. Although ocular fundus photography has existed for decades, its use has been traditionally restricted to ophthalmology clinical care settings and textbooks. Recent research has shown a role for nonmydriatic fundus photography in nonophthalmic settings, encouraging more widespread adoption of fundus photography technology. Recent studies have also affirmed the role of fundus photography as an adjunct or alternative to direct ophthalmoscopy in undergraduate medical education. In this review, the authors examine the use of ocular fundus photography as an educational tool and suggest future applications for this important technology. Novel applications of fundus photography as an educational tool have the potential to resurrect the dying art of funduscopy. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  13. Analysis of the performance and usefulness of a new digital gamma camera and short-lived radionuclide Au-195m for cardiac studies

    International Nuclear Information System (INIS)

    Mena, I.; Amaral, H.; Adams, R.; Garcia, E.; de Jong, R.

    1983-01-01

    The purpose of this investigation is to evaluate, at LAC Harbor-UCLA Medical Center, the dead-time characteristics of the Elscint digital camera (Apex 415), designed by the manufacturer to improve count rate performance, as well as its resolution and its clinical applications in first-pass ventriculography. The Hg-195m/Au-195m generator is evaluated for Hg-195 leakage, dosimetry and clinical usefulness. The high count rate capability of the digital camera is maximally utilized in biplane ventriculography, which may become the optimal modality for dynamic cardiovascular nuclear medicine studies. The resolution achieved in this high count rate modality appears perfectly adequate for high quality studies. The imaging and radiation exposure characteristics of the 30.5-second half-life Au-195m promise a significant clinical contribution in the field of adult and pediatric cardiology.

  14. Waste reduction efforts through the evaluation and procurement of a digital camera system for the Alpha-Gamma Hot Cell Facility at Argonne National Laboratory-East

    International Nuclear Information System (INIS)

    Bray, T. S.; Cohen, A. B.; Tsai, H.; Kettman, W. C.; Trychta, K.

    1999-01-01

    The Alpha-Gamma Hot Cell Facility (AGHCF) at Argonne National Laboratory-East is a research facility where sample examinations involve traditional photography. The AGHCF documents samples with photographs (both Polaroid self-developing and negative film). Wastes generated include developing chemicals. The AGHCF evaluated, procured, and installed a digital camera system for the Leitz metallograph to significantly reduce labor, supplies, and wastes associated with traditional photography with a return on investment of less than two years

  15. Fundus Autofluorescence [Fundus Otoflöresans]

    OpenAIRE

    ŞERMET, Figen

    2017-01-01

    Fundus autofluorescence imaging is a useful imaging method in retinal diseases for understanding pathophysiological mechanisms, diagnosis, phenotype-genotype correlation, identifying the factors that affect disease progression, and monitoring treatment. Fundus autofluorescence imaging can be used in the diagnosis and follow-up of many diseases. In this review, the diseases in which fundus autofluorescence imaging plays an important role in the diagnosis and differential diagnosis of retinal disorders are also illustrated with clinical case photographs.

  16. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582, Japan and Department of Radiology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Morishita, Junji, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582 (Japan)

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
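
    The conversion described above combines the camera's unprocessed R, G and B signals into a single grey-scale signal proportional to display luminance by means of per-channel weighting factors, with the monochrome LCD case reducing to the green channel alone. A minimal sketch of that weighting step (the calibration that yields the weighting factors is described in the paper and not reproduced here):

      import numpy as np

      def grayscale_from_rgb(raw_rgb, weights):
          """Combine unprocessed R, G, B camera signals (H x W x 3 array) into a
          single grey-scale signal proportional to display luminance using
          per-channel weighting factors."""
          w = np.asarray(weights, dtype=float)
          return np.tensordot(raw_rgb.astype(float), w, axes=([-1], [0]))

      # Monochrome LCD: only the green channel is used, i.e. weights = (0, 1, 0)
      # Colour LCD: weights chosen so the result tracks the measured luminance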

  17. Fundus autofluorescence: applications and perspectives.

    Science.gov (United States)

    Cuba, J; Gómez-Ulla, F

    2013-02-01

    To describe the autofluorescence findings in the different retinal diseases included in the study, and to determine in which diseases autofluorescence may be as useful as, or more useful than, fluorescein angiography (FAG) in terms of diagnostic information. We studied the retinal autofluorescence of 123 eyes of 93 patients, covering various diseases of the eye fundus. In all cases we examined the fundus and retinal autofluorescence and, if indicated, FAG was performed. Analysis of the autofluorescence was performed using the Heidelberg Retina Angiograph 2 (HRA2; Heidelberg Engineering, Germany). The autofluorescence information provided was equal to or better than FAG in: 68.18% of cases of macular edema, 50% of pigment epithelium detachments, 100% of pigment epithelium atrophies, 100% of central serous chorioretinopathy, 55.55% of choroidal neovascularization, 100% of retinal dystrophies with deposition of lipofuscin, and 100% of hard exudates and pre-retinal hemorrhages. Autofluorescence is a quick and non-invasive examination method, comfortable for both patient and examiner, and with a very short learning curve. It provides diagnostic information about many eye fundus diseases. While more studies and more experience with its use are needed, its interest lies in the possibility of avoiding angiography in patients with these diseases, and in the additional information autofluorescence provides about the functional state of cells and retinal pigments. Copyright © 2011 Sociedad Española de Oftalmología. Published by Elsevier Espana. All rights reserved.

  18. Application of smart phone and supporting set for fundus imaging in primary hospital of rural area

    Directory of Open Access Journals (Sweden)

    Yong-Feng Jing

    2018-01-01

    Full Text Available AIM: To describe the application of a smartphone and a supporting set for acquiring fundus images during slit-lamp examination with a non-contact lens in primary hospitals in rural areas. METHODS: The supporting set for the smartphone was purchased from Taobao and securely connected to the ocular lens of the slit-lamp microscope. The fundus photographs were taken with the assistance of a non-contact slit-lamp lens from Volk. RESULTS: High-quality images of various retinal diseases could be successfully taken with the smartphone and supporting set during slit-lamp examination. The fundus images were sent to patients via WeChat as medical records or used for teleconsultation. CONCLUSION: High-resolution smartphones are widely used nowadays and supporting sets are very accessible; thus high-quality images can be obtained at minimal cost in rural hospitals. The digital fundus images will be beneficial for medical records and for rapid diagnosis via teleconsultation.

  19. First high speed imaging of lightning from summer thunderstorms over India: Preliminary results based on amateur recording using a digital camera

    Science.gov (United States)

    Narayanan, V. L.

    2017-12-01

    For the first time, high-speed imaging of lightning from a few isolated tropical thunderstorms is reported from India. The recordings were made from Tirupati (13.6oN, 79.4oE, 180 m above mean sea level) during the summer months with a digital camera capable of recording high-speed video at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14400 deinterlaced images per video file. An automatic processing algorithm was developed for quick identification and analysis of the lightning events and will be discussed in detail. Preliminary results indicating different types of phenomena associated with lightning, such as stepped leaders, dart leaders, luminous channels corresponding to continuing current, and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud and cloud-air discharges will also be shown. This indicates that although high-speed cameras with several thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-sensor-based digital cameras can provide important information as well. The lightning imaging activity presented here was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments to conduct coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are at such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings made with affordable digital cameras will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
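
    The automatic event identification mentioned above has to sift roughly 14400 frames per 30-second clip; a common first pass is to flag frames whose mean brightness jumps well above the clip's baseline. The sketch below shows such a brightness-threshold pass and is only a plausible stand-in for the authors' algorithm, whose details are not given in the abstract:

      import numpy as np

      def find_lightning_frames(frames, k_sigma=5.0):
          """Return indices of frames whose mean brightness rises well above the
          clip's baseline (illustrative threshold choice, not the published one)."""
          mean_level = np.array([f.mean() for f in frames], dtype=float)
          baseline = np.median(mean_level)
          spread = np.std(mean_level)
          return np.where(mean_level > baseline + k_sigma * spread)[0]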

  20. How to optimize radiological images captured from digital cameras, using the Adobe Photoshop 6.0 program.

    Science.gov (United States)

    Chalazonitis, A N; Koumarianos, D; Tzovara, J; Chronopoulos, P

    2003-06-01

    Over the past decade, the technology that permits images to be digitized, together with the reduction in the cost of digital equipment, allows quick digital transfer of any conventional radiological film. Images can then be transferred to a personal computer, and several software programs are available to manipulate their digital appearance. In this article, the fundamentals of digital imaging are discussed, as well as the wide variety of optional adjustments that the Adobe Photoshop 6.0 (Adobe Systems, San Jose, CA) program offers to present radiological images with satisfactory digital image quality.

  1. Grading of 7-standard field fundus photography for diagnosis of diabetic retinopathy

    Institute of Scientific and Technical Information of China (English)

    赵琳; 刘瑜玲; 鹿欣荣; 钱芳; 张敏

    2009-01-01

    Objective: To evaluate the ability of mydriatic single-field 50° digital fundus photography and mydriatic 7-standard field fundus photography to detect diabetic retinopathy (DR) in patients with diabetes mellitus (DM), compared with fundus fluorescein angiography (FFA). Methods: A total of 164 diabetic patients (308 eyes) attending a hospital-based screening clinic were recruited. Mydriatic single-field 50° digital fundus photography, mydriatic 7-standard field ETDRS fundus photography and FFA were performed using a Zeiss FF450 plus IR fundus camera. Grading of diabetic retinopathy was carried out by two experienced ophthalmologists according to the international clinical classification of diabetic retinopathy, and the results of the three methods were compared. Results: Mydriatic single-field 50° digital fundus photography and FFA demonstrated moderate agreement (k=0.488), whereas mydriatic 7-standard field fundus photography and FFA demonstrated near-perfect agreement (k=0.873). Conclusion: Mydriatic 7-standard field fundus photography is an effective and clinically viable technique for diabetic retinopathy screening in a diabetic population, whereas the reliability of mydriatic single-field 50° photography of the posterior pole is only moderate.
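    The agreement values quoted above (k = 0.488 and k = 0.873) are Cohen's kappa statistics; a minimal sketch of the computation from a grading confusion table, using hypothetical counts rather than the study data:

    ```python
    import numpy as np

    def cohen_kappa(confusion):
        """Cohen's kappa from a square confusion matrix of grading counts
        (rows: retinopathy stage by photography, columns: stage by FFA)."""
        c = np.asarray(confusion, dtype=float)
        n = c.sum()
        po = np.trace(c) / n                                   # observed agreement
        pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2    # chance agreement
        return (po - pe) / (1.0 - pe)

    # Hypothetical 3-stage grading table comparing fundus photography with FFA
    print(cohen_kappa([[40, 5, 1],
                       [6, 30, 4],
                       [1, 3, 20]]))
    ```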

  2. Adaptive optics fundus images of cone photoreceptors in the macula of patients with retinitis pigmentosa.

    Science.gov (United States)

    Tojo, Naoki; Nakamura, Tomoko; Fuchizawa, Chiharu; Oiwake, Toshihiko; Hayashi, Atsushi

    2013-01-01

    The purpose of this study was to examine cone photoreceptors in the macula of patients with retinitis pigmentosa using an adaptive optics fundus camera and to investigate any correlations between cone photoreceptor density and findings on optical coherence tomography and fundus autofluorescence. We examined two patients with typical retinitis pigmentosa who underwent ophthalmological examination, including measurement of visual acuity, and gathering of electroretinographic, optical coherence tomographic, fundus autofluorescent, and adaptive optics fundus images. The cone photoreceptors in the adaptive optics images of the two patients with retinitis pigmentosa and five healthy subjects were analyzed. An abnormal parafoveal ring of high-density fundus autofluorescence was observed in the macula in both patients. The border of the ring corresponded to the border of the external limiting membrane and the inner segment and outer segment line in the optical coherence tomographic images. Cone photoreceptors at the abnormal parafoveal ring were blurred and decreased in the adaptive optics images. The blurred area corresponded to the abnormal parafoveal ring in the fundus autofluorescence images. Cone densities were low at the blurred areas and at the nasal and temporal retina along a line from the fovea compared with those of healthy controls. The results for cone spacing and Voronoi domains in the macula corresponded with those for the cone densities. Cone densities were heavily decreased in the macula, especially at the parafoveal ring on high-density fundus autofluorescence in both patients with retinitis pigmentosa. Adaptive optics images enabled us to observe in vivo changes in the cone photoreceptors of patients with retinitis pigmentosa, which corresponded to changes in the optical coherence tomographic and fundus autofluorescence images.

  3. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones.

    Science.gov (United States)

    Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P

    2017-10-13

    This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, which aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blurs by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
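    The abstract does not give the controller coefficients; the sketch below only illustrates, with assumed gain, zero, pole and sampling period, how a continuous lead-lag compensator C(s) = K(s + z)/(s + p) can be discretized with the bilinear (Tustin) transform into the kind of difference equation that is cheap to evaluate on an FPGA:

    ```python
    def make_lead_lag(K, zero, pole, Ts):
        """Return a discrete-time lead-lag update obtained by applying the Tustin
        transform to C(s) = K*(s + zero)/(s + pole), sampled every Ts seconds."""
        a = 2.0 / Ts
        b0 = K * (a + zero)      # numerator coefficients of C(z)
        b1 = K * (zero - a)
        a0 = a + pole            # denominator coefficients of C(z)
        a1 = pole - a
        state = {"e_prev": 0.0, "u_prev": 0.0}

        def step(e):
            # difference equation: a0*u[n] + a1*u[n-1] = b0*e[n] + b1*e[n-1]
            u = (b0 * e + b1 * state["e_prev"] - a1 * state["u_prev"]) / a0
            state["e_prev"], state["u_prev"] = e, u
            return u

        return step

    # Hypothetical parameters: 10 kHz loop rate, lead zero at 50 rad/s, pole at 500 rad/s
    ctrl = make_lead_lag(K=1.5, zero=50.0, pole=500.0, Ts=1e-4)
    u = ctrl(0.01)   # control output for an example position error of 0.01 (arbitrary units)
    ```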

  4. The in vitro and in vivo validation of a mobile non-contact camera-based digital imaging system for tooth colour measurement.

    Science.gov (United States)

    Smith, Richard N; Collins, Luisa Z; Naeeni, Mojgan; Joiner, Andrew; Philpotts, Carole J; Hopkinson, Ian; Jones, Clare; Lath, Darren L; Coxon, Thomas; Hibbard, James; Brook, Alan H

    2008-01-01

    To assess the reproducibility of a mobile non-contact camera-based digital imaging system (DIS) for measuring tooth colour under in vitro and in vivo conditions. One in vitro and two in vivo studies were performed using a mobile non-contact camera-based digital imaging system. In vitro study: two operators used the DIS to image 10 dry tooth specimens in a randomised order on three occasions. In vivo study 1: 25 subjects with two natural, normally aligned, upper central incisors had their teeth imaged using the DIS on four consecutive days by one operator to measure day-to-day variability. On one of the four test days, duplicate images were collected by three different operators to measure inter- and intra-operator variability. In vivo study 2: 11 subjects with two natural, normally aligned, upper central incisors had their teeth imaged using the DIS twice daily over three days within the same week to assess day-to-day variability. Three operators collected images from subjects in a randomised order to measure inter- and intra-operator variability. Subject-to-subject variability was the largest source of variation within the data. Pairwise correlations and concordance coefficients were > 0.7 for each operator, demonstrating good precision and excellent operator agreement in each of the studies. Intraclass correlation coefficients (ICCs) indicate that day-to-day reliability was good to excellent, with all ICCs > 0.75 for each operator. The mobile non-contact camera-based digital imaging system was shown to be a reproducible means of measuring tooth colour in both in vitro and in vivo experiments.

  5. Fundus imaging with a nasal endoscope

    Directory of Open Access Journals (Sweden)

    P Mahesh Shanmugam

    2015-01-01

    Full Text Available Wide field fundus imaging is needed to diagnose, treat, and follow up patients with retinal pathology. This is particularly true for pediatric patients, for whom repeated evaluation is a challenge. The presently available imaging machines provide high definition images but carry the obvious disadvantage of being costly, bulky, or both, which limits their use to large centers. We hereby report a technique of fundus imaging using a nasal endoscope coupled with viscoelastic. A regular nasal endoscope with viscoelastic coupling was placed on the cornea to image the fundus of infants under general anesthesia. Wide angle fundus images of various fundus pathologies in infants could be obtained easily with readily available instruments and without much financial investment for the institution.

  6. Measurement of Young’s modulus and Poisson’s ratio of metals by means of ESPI using a digital camera

    Science.gov (United States)

    Francisco, J. B. Pascual; Michtchenko, A.; Barragán Pérez, O.; Susarrey Huerta, O.

    2016-09-01

    In this paper, mechanical experiments with a low-cost interferometry set-up are presented. The set-up is suitable for an undergraduate laboratory where optical equipment is absent. The arrangement consists of two planes of illumination, allowing the measurement of the two perpendicular in-plane displacement directions. An axial load was applied on three different metals, and the longitudinal and transversal displacements were measured sequentially. A digital camera was used to acquire the images of the different states of load of the illuminated area. A personal computer was used to perform the digital subtraction of the images to obtain the fringe correlations, which are needed to calculate the displacements. Finally, Young’s modulus and Poisson’s ratio of the metals were calculated using the displacement data.
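    As a hedged illustration of the data reduction described above (not the authors' exact procedure), the speckle images are subtracted to reveal correlation fringes, and the measured longitudinal and transverse displacements over the gauge region give E and ν from the applied load; all numbers and function names below are hypothetical:

    ```python
    import numpy as np

    def fringe_pattern(img_loaded, img_reference):
        """Absolute difference of two speckle images; the bright/dark bands are the
        correlation fringes used to evaluate displacement in an ESPI set-up."""
        return np.abs(img_loaded.astype(float) - img_reference.astype(float))

    def youngs_modulus_and_poisson(force, area, gauge_length, gauge_width,
                                   d_long, d_trans):
        """Elastic constants from displacements measured over the gauge region.
        Units must be consistent (N, m, m^2); values here are placeholders."""
        stress = force / area
        eps_long = d_long / gauge_length
        eps_trans = d_trans / gauge_width
        E = stress / eps_long
        nu = -eps_trans / eps_long
        return E, nu

    # Example with made-up numbers: 2 kN on a 1e-5 m^2 cross-section of a steel-like metal
    E, nu = youngs_modulus_and_poisson(2e3, 1e-5, 0.05, 0.01, 5e-5, -3e-6)
    print(E, nu)   # ~2e11 Pa and ~0.3 for these illustrative inputs
    ```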

  7. Automated detection of fundus photographic red lesions in diabetic retinopathy.

    Science.gov (United States)

    Larsen, Michael; Godt, Jannik; Larsen, Nicolai; Lund-Andersen, Henrik; Sjølie, Anne Katrin; Agardh, Elisabet; Kalm, Helle; Grunkin, Michael; Owens, David R

    2003-02-01

    To compare a fundus image-analysis algorithm for automated detection of hemorrhages and microaneurysms with visual detection of retinopathy in patients with diabetes. Four hundred fundus photographs (35-mm color transparencies) were obtained in 200 eyes of 100 patients with diabetes who were randomly selected from the Welsh Community Diabetic Retinopathy Study. A gold standard reference was defined by classifying each patient as having or not having diabetic retinopathy based on overall visual grading of the digitized transparencies. A single-lesion visual grading was made independently, comprising meticulous outlining of all single lesions in all photographs and used to develop the automated red lesion detection system. A comparison of visual and automated single-lesion detection in replicating the overall visual grading was then performed. Automated red lesion detection demonstrated a specificity of 71.4% and a resulting sensitivity of 96.7% in detecting diabetic retinopathy when applied at a tentative threshold setting for use in diabetic retinopathy screening. The accuracy of 79% could be raised to 85% by adjustment of a single user-supplied parameter determining the balance between the screening priorities, for which a considerable range of options was demonstrated by the receiver-operating characteristic (area under the curve 90.3%). The agreement of automated lesion detection with overall visual grading (0.659) was comparable to the mean agreement of six ophthalmologists (0.648). Detection of diabetic retinopathy by automated detection of single fundus lesions can be achieved with a performance comparable to that of experienced ophthalmologists. The results warrant further investigation of automated fundus image analysis as a tool for diabetic retinopathy screening.
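    The screening performance figures quoted above (sensitivity 96.7%, specificity 71.4%, accuracy 79-85%) are standard confusion-matrix quantities; a minimal sketch of their computation from per-patient counts (the counts below are hypothetical, not those of the study):

    ```python
    def screening_metrics(tp, fn, tn, fp):
        """Sensitivity, specificity and accuracy of a retinopathy screening test
        from per-patient true/false positive and negative counts."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    # Hypothetical counts for ~100 screened patients
    print(screening_metrics(tp=58, fn=2, tn=30, fp=12))
    ```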

  8. Diagnosing and Ranking Retinopathy Disease Level Using Diabetic Fundus Image Recuperation Approach

    Directory of Open Access Journals (Sweden)

    K. Somasundaram

    2015-01-01

    Full Text Available Retinal fundus images are widely used in diagnosing different types of eye diseases. The existing methods such as Feature Based Macular Edema Detection (FMED) and Optimally Adjusted Morphological Operator (OAMO) effectively detected the presence of exudation in fundus images and identified the true positive ratio of exudates detection, respectively. These mechanically detected exudates did not include more detailed feature selection technique to the system for detection of diabetic retinopathy. To categorize the exudates, Diabetic Fundus Image Recuperation (DFIR) method based on sliding window approach is developed in this work to select the features of optic cup in digital retinal fundus images. The DFIR feature selection uses collection of sliding windows with varying range to obtain the features based on the histogram value using Group Sparsity Nonoverlapping Function. Using support vector model in the second phase, the DFIR method based on Spiral Basis Function effectively ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a much promising result for developing practically automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method performs research on the factors such as sensitivity, ranking efficiency, and feature selection time.

  9. Diagnosing and ranking retinopathy disease level using diabetic fundus image recuperation approach.

    Science.gov (United States)

    Somasundaram, K; Rajendran, P Alli

    2015-01-01

    Retinal fundus images are widely used in diagnosing different types of eye diseases. The existing methods such as Feature Based Macular Edema Detection (FMED) and Optimally Adjusted Morphological Operator (OAMO) effectively detected the presence of exudation in fundus images and identified the true positive ratio of exudates detection, respectively. These mechanically detected exudates did not include more detailed feature selection technique to the system for detection of diabetic retinopathy. To categorize the exudates, Diabetic Fundus Image Recuperation (DFIR) method based on sliding window approach is developed in this work to select the features of optic cup in digital retinal fundus images. The DFIR feature selection uses collection of sliding windows with varying range to obtain the features based on the histogram value using Group Sparsity Nonoverlapping Function. Using support vector model in the second phase, the DFIR method based on Spiral Basis Function effectively ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a much promising result for developing practically automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method performs research on the factors such as sensitivity, ranking efficiency, and feature selection time.

  10. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    Directory of Open Access Journals (Sweden)

    Leitritz MA

    2014-07-01

    Full Text Available Martin Alexander Leitritz, Focke Ziemssen, Karl Ulrich Bartz-Schmidt, Bogomil Voykov Centre for Ophthalmology, University Eye Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany Purpose: To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods: A high-speed camera taking 300 frames per second observed the movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results: Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were -0.32 mm (range -0.69 to 0.024) and 0.175 mm (range -0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion: With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurement of iris-claw intraocular lens and angle-supported intraocular lens movements appear to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility under kinetic stress during eye movements.

  11. Accuracy assessment of digital surface models based on a small format action camera in a North-East Hungarian sample area

    Directory of Open Access Journals (Sweden)

    Barkóczi Norbert

    2017-01-01

    Full Text Available The use of small-format digital action cameras has increased in the past few years in various applications, owing to their low cost, flexibility and reliability. These small cameras can be mounted on several devices, such as unmanned aerial vehicles (UAV), to create 3D models with photogrammetric techniques. Whether creating or receiving such databases, one of the most important questions will always be how accurate these systems are and what accuracy can be achieved. We gathered overlapping images, created point clouds, and then generated 21 different digital surface models (DSM). The models differed in the number of images used and in the flight height. We repeated the flights three times in order to compare the same models with each other. In addition, we measured 129 reference points with RTK-GPS to compare the height differences with the cell values extracted from each DSM. The results showed that a higher flight height yields lower errors, and that the optimal air base distance is one fourth of the flying height in both cases. The lowest median was 0.08 meter, for the 180 meter flight, 50 meter air base distance model. Raising the number of images does not increase the overall accuracy. The relationship between the amount of error and the distance from the nearest GCP is not linear in every case.
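    The accuracy assessment described above reduces to differencing DSM cell values against the RTK-GPS check-point heights; a minimal sketch of that comparison, with hypothetical heights:

    ```python
    import numpy as np

    def dsm_height_errors(dsm_heights, gps_heights):
        """Error statistics between DSM cell values extracted at check points and
        the RTK-GPS reference heights (both in metres)."""
        diff = np.asarray(dsm_heights) - np.asarray(gps_heights)
        rmse = np.sqrt(np.mean(diff ** 2))
        return {"rmse": rmse,
                "median_abs_error": np.median(np.abs(diff)),
                "max_abs_error": np.max(np.abs(diff))}

    # Hypothetical check-point comparison (metres)
    stats = dsm_height_errors([101.12, 98.40, 103.55], [101.05, 98.51, 103.50])
    print(stats)
    ```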

  12. Fundus Findings in Wernicke Encephalopathy

    Directory of Open Access Journals (Sweden)

    Tal Serlin

    2017-07-01

    Full Text Available Wernicke encephalopathy (WE) is an acute neuropsychiatric syndrome resulting from thiamine (vitamin B1) deficiency, classically characterized by the triad of ophthalmoplegia, confusion, and ataxia. While commonly associated with chronic alcoholism, WE may also occur in the setting of poor nutrition or absorption. We present a 37-year-old woman who underwent laparoscopic sleeve gastrectomy and presented with visual disturbance with bilateral horizontal nystagmus, confusion, and postural imbalance. Fundus examination revealed bilateral optic disc edema with a retinal hemorrhage in the left eye. Metabolic workup demonstrated thiamine deficiency. Her symptoms resolved after thiamine treatment. This case raises awareness of the possibility of posterior segment findings in WE, which are underreported.

  13. Quantitative Fundus Autofluorescence in Recessive Stargardt Disease

    OpenAIRE

    Burke, Tomas R.; Duncker, Tobias; Woods, Russell L.; Greenberg, Jonathan P.; Zernant, Jana; Tsang, Stephen H.; Smith, R. Theodore; Allikmets, Rando; Sparrow, Janet R.; Delori, François C.

    2014-01-01

    Quantitative fundus autofluorescence (qAF) is significantly increased in Stargardt disease, consistent with previous reports of increased RPE lipofuscin. QAF will help to establish genotype-phenotype correlations and may serve as an outcome measure in clinical trials.

  14. [Fundus autofluorescence in dry AMD - impact on disease progression].

    Science.gov (United States)

    Vidinova, C N; Gouguchkova, P T; Vidinov, K N

    2013-11-01

    Fundus autofluorescence is a novel technique that provides information about the RPE cells by evaluating the distribution of lipofuscin in the retina. The purpose of our study was to evaluate the diagnostic abilities of OCT (RTVue) and fundus autofluorescence in predicting the progression of dry AMD. In our study 37 dry AMD patients were enrolled: 22 of them with drusen and 15 with developed geographic atrophy. They all underwent complete ophthalmological examinations including OCT and autofluorescence. We used the RTVue OCT programmes HD line, Cross line, EMM5 and EMM5 progression in all cases. The autofluorescence was recorded with the Canon CX1 fundus camera. OCT images in the patients with dry AMD and large drusen showed typical undulations of the RPE/choroid line and occasionally drusenoid detachment of the RPE. Autofluorescence showed different patterns; the confluent reticular pattern was associated with the development of neovascular membranes. In geographic atrophy patients, OCT showed diminished retinal thickness measured with EMM5. On autofluorescence, the findings at the border zone between atrophic and normal retina were of particular importance: diffusely increased autofluorescence in that area was considered a sign of further atrophy progression. Our results indicate that OCT in combination with autofluorescence is important in following the progression of dry AMD. Pathological autofluorescence at the border of atrophic lesions is an important sign of disease activity. Although both OCT and autofluorescence visualise the changes in the RPE, autofluorescence is of key importance in predicting the development of the disease. Georg Thieme Verlag KG Stuttgart · New York.

  15. MEASUREMENT OF LARGE-SCALE SOLAR POWER PLANT BY USING IMAGES ACQUIRED BY NON-METRIC DIGITAL CAMERA ON BOARD UAV

    Directory of Open Access Journals (Sweden)

    R. Matsuoka

    2012-07-01

    Full Text Available This paper reports an experiment conducted in order to investigate the feasibility of deformation measurement of a large-scale solar power plant on reclaimed land by using images acquired by a non-metric digital camera on board a micro unmanned aerial vehicle (UAV). It is required that the root mean square error (RMSE) in height measurement be less than 26 mm, which is one third of the critical deformation limit of 78 mm out of the plane of a solar panel. Images utilized in the experiment were obtained by an Olympus PEN E-P2 digital camera on board a Microdrones md4-1000 quadrocopter. The planned forward and side overlap ratios of vertical image acquisition were 60 % and 60 %, respectively. The planned flying height of the UAV was 20 m above ground level, and the ground resolution of an image is approximately 5.0 mm by 5.0 mm. 8 control points around the experiment area were utilized for orientation. Measurement results are evaluated against the space coordinates of 220 check points, which are corner points of 55 solar panels selected from the 1768 solar panels in the experiment area. Two teams engaged in the experiment. One carried out orientation and measurement using 171 images following the procedure of conventional aerial photogrammetry, and the other did so using 126 images in the manner of close range photogrammetry. The former fails to satisfy the required accuracy, while the RMSE in height measurement by the latter is 8.7 mm, which satisfies the required accuracy. From the experiment results, we conclude that deformation measurement of a large-scale solar power plant on reclaimed land using images acquired by a non-metric digital camera on board a micro UAV would be feasible if the points utilized in orientation and measurement have a sufficient number of bundles in good geometry and self-calibration is carried out during orientation.

  16. Investigating the influence of chromatic aberration and optical illumination bandwidth on fundus imaging in rats

    Science.gov (United States)

    Li, Hao; Liu, Wenzhong; Zhang, Hao F.

    2015-10-01

    Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating the pathogenesis and therapeutic strategies. However, due to severe aberrations, the retinal image quality in rodents can be much worse than that in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that the rat retinal image quality decreased with increasing illumination bandwidth. We achieved the retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus camera for rodents. Our method is also beneficial to dispersion compensation in multiwavelength retinal imaging applications.

  17. The grey fovea sign of macular oedema or subfoveal fluid on non-stereoscopic fundus photographs

    DEFF Research Database (Denmark)

    Hasler, Pascal W; Soliman, Wael; Sander, Birgit

    2017-01-01

    PURPOSE: To describe the grey fovea sign of fovea-involving macular oedema or subretinal fluid accumulation in red-free fundus photography. METHODS: A test set of 91 digital fundus photographs of good quality from 100 consecutive eyes in 72 patients with diabetic retinopathy or central serous chorioretinopathy was composed by one of the investigators and evaluated by four masked observers. The photographs were graded as to whether a normal dark fovea was present or absent. The reference method was foveal thickness measurement using optical coherence tomography (OCT). RESULTS: Eyes graded as having a grey fovea on fundus photographs (n = 67) had a median foveal thickness of 279 μm (interquartile range 130 μm), whereas eyes graded as having a normal dark fovea (n = 24) had a median foveal thickness of 238 μm (interquartile range 44.5 μm, p = 0.025). CONCLUSION: The absence of a dark fovea on red-free fundus photographs ...

  18. Fundus Autofluorescence Imaging in an Ocular Screening Program

    Directory of Open Access Journals (Sweden)

    A. M. Kolomeyer

    2012-01-01

    Full Text Available Purpose. To describe integration of fundus autofluorescence (FAF) imaging into an ocular screening program. Methods. Fifty consecutive screening participants were included in this prospective pilot imaging study. Color and FAF (530/640 nm exciter/barrier filters) images were obtained with a 15.1 MP Canon nonmydriatic hybrid camera. A clinician evaluated the images on site to determine need for referral. Visual acuity (VA), intraocular pressure (IOP), and ocular pathology detected by color fundus and FAF imaging modalities were recorded. Results. Mean ± SD age was 47.4 ± 17.3 years. Fifty-two percent were female and 58% African American. Twenty-seven percent had a comprehensive ocular examination within the past year. Mean VA was 20/39 in the right eye and 20/40 in the left eye. Mean IOP was 15 mmHg bilaterally. Positive color and/or FAF findings were identified in nine (18%) individuals with diabetic retinopathy or macular edema (n=4), focal RPE defects (n=2), age-related macular degeneration (n=1), central serous retinopathy (n=1), and ocular trauma (n=1). Conclusions. FAF was successfully integrated in our ocular screening program and aided in the identification of ocular pathology. Larger studies examining the utility of this technology in screening programs may be warranted.

  19. Fundus autofluorescence imaging in an ocular screening program.

    Science.gov (United States)

    Kolomeyer, A M; Nayak, N V; Szirth, B C; Khouri, A S

    2012-01-01

    Purpose. To describe integration of fundus autofluorescence (FAF) imaging into an ocular screening program. Methods. Fifty consecutive screening participants were included in this prospective pilot imaging study. Color and FAF (530/640 nm exciter/barrier filters) images were obtained with a 15.1MP Canon nonmydriatic hybrid camera. A clinician evaluated the images on site to determine need for referral. Visual acuity (VA), intraocular pressure (IOP), and ocular pathology detected by color fundus and FAF imaging modalities were recorded. Results. Mean ± SD age was 47.4 ± 17.3 years. Fifty-two percent were female and 58% African American. Twenty-seven percent had a comprehensive ocular examination within the past year. Mean VA was 20/39 in the right eye and 20/40 in the left eye. Mean IOP was 15 mmHg bilaterally. Positive color and/or FAF findings were identified in nine (18%) individuals with diabetic retinopathy or macular edema (n = 4), focal RPE defects (n = 2), age-related macular degeneration (n = 1), central serous retinopathy (n = 1), and ocular trauma (n = 1). Conclusions. FAF was successfully integrated in our ocular screening program and aided in the identification of ocular pathology. Larger studies examining the utility of this technology in screening programs may be warranted.

  20. Unique identification code for medical fundus images using blood vessel pattern for tele-ophthalmology applications.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar

    2016-10-01

    Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for generating a unique identification code for fundus images in tele-ophthalmology applications and database storage. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is used to generate a unique identification code for the digital fundus image: the segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to form the code. The proposed method of medical image identification was tested on the publicly available DRIVE and MESSIDOR databases of fundus images and the results are encouraging. Experimental results indicate the uniqueness of the identification code and lossless recovery of the patient identity from the unique identification code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
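    The exact combination scheme is not given in the abstract; as a simplified, hypothetical illustration of the idea (not the published method), a binarized vessel pattern near the optic disc can be packed into a hexadecimal signature and reversibly concatenated with the patient ID, so that the identity is recoverable from the code without modifying the image:

    ```python
    import numpy as np

    def generate_id_code(vessel_mask, patient_id):
        """Build an identification code from a binary vessel mask (2-D numpy array
        of 0/1 near the optic disc) and the patient ID string.
        Purely illustrative; not the scheme of the cited paper."""
        bits = vessel_mask.astype(np.uint8).flatten()
        signature = np.packbits(bits).tobytes().hex()   # vessel-pattern signature
        return f"{patient_id}-{signature}"

    def recover_patient_id(code):
        """Lossless recovery of the patient identity from the identification code."""
        return code.split("-", 1)[0]

    code = generate_id_code(np.random.randint(0, 2, (16, 16)), "PAT00123")
    assert recover_patient_id(code) == "PAT00123"
    ```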

  1. An evaluation of fundus photography and fundus autofluorescence in the diagnosis of cuticular drusen

    DEFF Research Database (Denmark)

    Høeg, Tracy B; Moldow, Birgitte; Klein, Ronald

    2016-01-01

    PURPOSE: To examine non-mydriatic fundus photography (FP) and fundus autofluorescence (FAF) as alternative non-invasive imaging modalities to fluorescein angiography (FA) in the detection of cuticular drusen (CD). METHODS: Among 2953 adults from the Danish Rural Eye Study (DRES) with gradable FP...

  2. Quantitative fundus autofluorescence in recessive Stargardt disease.

    Science.gov (United States)

    Burke, Tomas R; Duncker, Tobias; Woods, Russell L; Greenberg, Jonathan P; Zernant, Jana; Tsang, Stephen H; Smith, R Theodore; Allikmets, Rando; Sparrow, Janet R; Delori, François C

    2014-05-01

    To quantify fundus autofluorescence (qAF) in patients with recessive Stargardt disease (STGD1). A total of 42 STGD1 patients (ages: 7-52 years) with at least one confirmed disease-associated ABCA4 mutation were studied. Fundus AF images (488-nm excitation) were acquired with a confocal scanning laser ophthalmoscope equipped with an internal fluorescent reference to account for variable laser power and detector sensitivity. The gray levels (GLs) of each image were calibrated to the reference, zero GL, magnification, and normative optical media density to yield qAF. Texture factor (TF) was calculated to characterize inhomogeneities in the AF image and patients were assigned to the phenotypes of Fishman I through III. Quantified fundus autofluorescence in 36 of 42 patients and TF in 27 of 42 patients were above normal limits for age. Young patients exhibited the relatively highest qAF, with levels up to 8-fold higher than healthy eyes. Quantified fundus autofluorescence and TF were higher in Fishman II and III than Fishman I, who had higher qAF and TF than healthy eyes. Patients carrying the G1916E mutation had lower qAF and TF than most other patients, even in the presence of a second allele associated with severe disease. Quantified fundus autofluorescence is an indirect approach to measuring RPE lipofuscin in vivo. We report that ABCA4 mutations cause significantly elevated qAF, consistent with previous reports indicating that increased RPE lipofuscin is a hallmark of STGD1. Even when qualitative differences in fundus AF images are not evident, qAF can elucidate phenotypic variation. Quantified fundus autofluorescence will serve to establish genotype-phenotype correlations and as an outcome measure in clinical trials.

  3. Quantitative Fundus Autofluorescence in Recessive Stargardt Disease

    Science.gov (United States)

    Burke, Tomas R.; Duncker, Tobias; Woods, Russell L.; Greenberg, Jonathan P.; Zernant, Jana; Tsang, Stephen H.; Smith, R. Theodore; Allikmets, Rando; Sparrow, Janet R.; Delori, François C.

    2014-01-01

    Purpose. To quantify fundus autofluorescence (qAF) in patients with recessive Stargardt disease (STGD1). Methods. A total of 42 STGD1 patients (ages: 7–52 years) with at least one confirmed disease-associated ABCA4 mutation were studied. Fundus AF images (488-nm excitation) were acquired with a confocal scanning laser ophthalmoscope equipped with an internal fluorescent reference to account for variable laser power and detector sensitivity. The gray levels (GLs) of each image were calibrated to the reference, zero GL, magnification, and normative optical media density to yield qAF. Texture factor (TF) was calculated to characterize inhomogeneities in the AF image and patients were assigned to the phenotypes of Fishman I through III. Results. Quantified fundus autofluorescence in 36 of 42 patients and TF in 27 of 42 patients were above normal limits for age. Young patients exhibited the relatively highest qAF, with levels up to 8-fold higher than healthy eyes. Quantified fundus autofluorescence and TF were higher in Fishman II and III than Fishman I, who had higher qAF and TF than healthy eyes. Patients carrying the G1916E mutation had lower qAF and TF than most other patients, even in the presence of a second allele associated with severe disease. Conclusions. Quantified fundus autofluorescence is an indirect approach to measuring RPE lipofuscin in vivo. We report that ABCA4 mutations cause significantly elevated qAF, consistent with previous reports indicating that increased RPE lipofuscin is a hallmark of STGD1. Even when qualitative differences in fundus AF images are not evident, qAF can elucidate phenotypic variation. Quantified fundus autofluorescence will serve to establish genotype-phenotype correlations and as an outcome measure in clinical trials. PMID:24677105
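    As a schematic illustration only (not the exact published calibration), the qAF computation described above scales the zero-offset-corrected gray level of a retinal region by the internal fluorescent reference and by correction terms for magnification and ocular media density; GL denotes the mean gray level of the region, GL0 the zero-light offset, and RCF the reference calibration factor:

    ```latex
    % Schematic form of the qAF calibration described in the abstract; the exact
    % correction terms used in the cited work are abbreviated into C_corr.
    \[
      \mathrm{qAF} \;\approx\; \mathrm{RCF}\,
      \frac{GL - GL_{0}}{GL_{\mathrm{ref}} - GL_{\mathrm{ref},0}}
      \times C_{\mathrm{corr}}(\text{magnification, media density})
    \]
    ```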

  4. Processing of A New Digital Orthoimage Map of The Martian Western Hemisphere Using Data Obtained From The Mars Orbiter Camera At A Resolution of 256 Pixel/deg

    Science.gov (United States)

    Wählisch, M.; Niedermaier, G.; van Gasselt, S.; Scholten, F.; Wewel, F.; Roatsch, T.; Matz, K.-D.; Jaumann, R.

    We present a new digital orthoimage map of Mars using data obtained from the CCD line scanner Mars Orbiter Camera (MOC) of the Mars Global Surveyor Mission (MGS) [1,2]. The map covers the Mars surface from 0 to 180 West and from 60 South to 60 North with the MDIM2 resolution of 256 pixel/degree. Image data processing has been performed using multiple programs developed by DLR, Technical University of Berlin [3], JPL, and the USGS. 4,339 Context and 183 Geodesy images [2] were included. After radiometric corrections, the images were Mars referenced [4], geometrically corrected [5] and orthoprojected using a global Martian Digital Terrain Model (DTM) with a resolution of 64 pixel/degree, developed at DLR and based on MGS Mars Orbiter Laser Altimeter (MOLA) data [6]. To eliminate major differences in brightness between the individual images of the mosaics, high- and low-pass filter processing techniques were applied to each image. After filtering, the images were mosaicked without registering or using block adjustment techniques in order to improve the geometric quality. It turns out that the accuracy of the navigation data is of such good quality that the orthoimages fit very well to each other. When merging the MOC mosaic with the MOLA data using an IHS transformation, we recognized very good correspondence between these two datasets. We created a topographic image map of the Coprates region (MC-18), adding contour lines derived from the global DTM to the mosaic. These maps are used for geological and morphological interpretations in order to review and improve our current Viking-based knowledge about the Martian surface. References: [1] www.mssss.com, [2] Caplinger, M. and M. Malin, "The Mars Orbiter Camera Geodesy Campaign", JGR, in press, [3] Scholten, F., Vol XXXI, Part B2, Wien 1996, p.351-356, [4] naif.jpl.nasa.gov, [5] R.L.Kirk et al. (2001), "Geometric Calibration of the Mars Orbiter Cameras and Coalignment with Mars Orbiter Laser Altimeter

  5. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Brian W., E-mail: brian.miller@pnnl.gov [Pacific Northwest National Laboratory, Richland, Washington 99354 and College of Optical Sciences, The University of Arizona, Tucson, Arizona 85719 (United States); Frost, Sofia H. L.; Frayo, Shani L.; Kenoyer, Aimee L.; Santos, Erlinda; Jones, Jon C.; Orozco, Johnnie J. [Fred Hutchinson Cancer Research Center, Seattle, Washington 98109 (United States); Green, Damian J.; Press, Oliver W.; Pagel, John M.; Sandmaier, Brenda M. [Fred Hutchinson Cancer Research Center, Seattle, Washington 98109 and Department of Medicine, University of Washington, Seattle, Washington 98195 (United States); Hamlin, Donald K.; Wilbur, D. Scott [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195 (United States); Fisher, Darrell R. [Dade Moeller Health Group, Richland, Washington 99354 (United States)

    2015-07-15

    Purpose: Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 ({sup 211}At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10{sup −4} cpm/cm{sup 2} (40 mm diameter detector area

  6. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera.

    Science.gov (United States)

    Miller, Brian W; Frost, Sofia H L; Frayo, Shani L; Kenoyer, Aimee L; Santos, Erlinda; Jones, Jon C; Green, Damian J; Hamlin, Donald K; Wilbur, D Scott; Fisher, Darrell R; Orozco, Johnnie J; Press, Oliver W; Pagel, John M; Sandmaier, Brenda M

    2015-07-01

    Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50-80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 ((211)At) activity distributions in cryosections of murine and canine tissue samples. The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10(-4) cpm/cm(2) (40 mm diameter detector area). Simultaneous imaging of multiple tissue sections was

  7. Semi-automated retinal vessel analysis in nonmydriatic fundus photography.

    Science.gov (United States)

    Schuster, Alexander Karl-Georg; Fischer, Joachim Ernst; Vossmerbaeumer, Urs

    2014-02-01

    Funduscopic assessment of the retinal vessels may be used to assess the health status of the microcirculation and as a component in the evaluation of cardiovascular risk factors. Typically, the evaluation is restricted to morphological appreciation without strict quantification. Our purpose was to develop and validate a software tool for semi-automated quantitative analysis of the retinal vasculature in nonmydriatic fundus photography. MATLAB software was used to develop a semi-automated image recognition and analysis tool for determination of the arterial-venous (A/V) ratio in the central vessel equivalent on 45° digital fundus photographs. Validity and reproducibility of the results were ascertained using nonmydriatic photographs of 50 eyes from 25 subjects recorded with a 3D OCT device (Topcon Corp.). Two hundred and thirty-three eyes of 121 healthy subjects were evaluated to define normative values. A software tool was developed using image thresholds for vessel recognition and vessel width calculation in a semi-automated three-step procedure: vessel recognition on the photograph and artery/vein designation, width measurement, and calculation of central retinal vessel equivalents. The mean vessel recognition rate was 78%, the vessel class designation rate 75%, and reproducibility between 0.78 and 0.91. The mean A/V ratio was 0.84. Application to a healthy norm cohort showed high congruence with previously published manual methods. Processing time per image was one minute. Quantitative geometrical assessment of the retinal vasculature may be performed in a semi-automated manner using dedicated software tools. Yielding reproducible numerical data within a short time, this may add value to mere morphological estimates in the clinical evaluation of fundus photographs. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
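    A sketch of the final computation step is given below: measured artery and vein widths are combined into central retinal artery and vein equivalents, and their ratio is taken. The 0.88/0.95 pairing constants are those of the widely used Knudtson revision and are an assumption here, since the abstract does not state which formula the tool implements:

    ```python
    def central_equivalent(widths, k):
        """Combine vessel widths into a central equivalent by repeatedly pairing the
        largest with the smallest calibre: w = k * sqrt(w_big^2 + w_small^2)."""
        w = sorted(widths)
        while len(w) > 1:
            paired = []
            while len(w) > 1:
                small, big = w.pop(0), w.pop(-1)
                paired.append(k * (small ** 2 + big ** 2) ** 0.5)
            paired.extend(w)      # carry an unpaired middle value, if any
            w = sorted(paired)
        return w[0]

    def av_ratio(artery_widths, vein_widths):
        """Arterio-venous ratio = CRAE / CRVE (assumed Knudtson constants)."""
        crae = central_equivalent(artery_widths, k=0.88)
        crve = central_equivalent(vein_widths, k=0.95)
        return crae / crve

    # Hypothetical widths (pixels) of the six largest arteries and veins
    print(av_ratio([72, 68, 65, 60, 58, 55], [88, 84, 80, 76, 70, 66]))
    ```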

  8. Detection of Fundus Lesions Using Classifier Selection

    Science.gov (United States)

    Nagayoshi, Hiroto; Hiramatsu, Yoshitaka; Sako, Hiroshi; Himaga, Mitsutoshi; Kato, Satoshi

    A system for detecting fundus lesions caused by diabetic retinopathy from fundus images is being developed. The system can screen the images in advance in order to reduce the inspection workload on doctors. One of the difficulties that must be addressed in completing this system is how to remove false positives (which tend to arise near blood vessels) without decreasing the detection rate of lesions in other areas. To overcome this difficulty, we developed classifier selection according to the position of a candidate lesion, and we introduced new features that can distinguish true lesions from false positives. A system incorporating classifier selection and these new features was tested in experiments using 55 fundus images with some lesions and 223 images without lesions. The results of the experiments confirm the effectiveness of the proposed system, namely, degrees of sensitivity and specificity of 98% and 81%, respectively.

  9. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  10. Commercial Slit-Lamp Anterior Segment Photography versus Digital Compact Camera Mounted on a Standard Slit-Lamp with an Adapter.

    Science.gov (United States)

    Oliphant, Huw; Kennedy, Alasdair; Comyn, Oliver; Spalton, David J; Nanavaty, Mayank A

    2018-06-16

    To compare a slit lamp-mounted camera (SLC) versus a digital compact camera (DCC) with a slit-lamp adaptor when used by an inexperienced technician. In this cross-sectional study, where posterior capsule opacification (PCO) was used as a comparator, patients consented to one photograph with the SLC and two with the DCC (DCC1 and DCC2), taken with a slit-lamp adaptor. An inexperienced clinic technician, who took all the photographs and masked the images, recruited one eye of each patient. Images were graded for PCO using EPCO2000 software by two independent masked graders. Repeatability between DCC1 and DCC2 and limits of agreement between the SLC and DCC1 mounted on the slit-lamp with an adaptor were assessed. The coefficient of repeatability and Bland-Altman plots were analyzed. Seventy-two patients (eyes) were recruited in the study. The first 9 patients (eyes) were excluded due to unsatisfactory image quality from both systems. The mean EPCO score for the SLC was 2.28 (95% CI: 2.09-2.45), for DCC1 was 2.28 (95% CI: 2.11-2.45), and for DCC2 was 2.11 (95% CI: 2.11-2.45). There was no significant difference in EPCO scores between SLC and DCC1 (p = 0.98) or between DCC1 and DCC2 (p = 0.97). The coefficient of repeatability between DCC images was 0.42, and the coefficient of repeatability between the DCC and SLC was 0.58. A DCC on a slit-lamp with an adaptor is comparable to an SLC. There is an initial learning curve, which is similar for both systems for an inexperienced user. This opens up the possibility of low-cost anterior segment imaging in clinical, research and teaching settings.
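    A minimal sketch of the repeatability statistics named above, assuming the common definition of the coefficient of repeatability as 1.96 times the standard deviation of the paired score differences (the abstract does not state which definition was used); the scores below are hypothetical:

    ```python
    import numpy as np

    def repeatability(scores_a, scores_b):
        """Coefficient of repeatability and Bland-Altman limits of agreement for
        two sets of paired EPCO scores (e.g., DCC1 vs DCC2)."""
        d = np.asarray(scores_a) - np.asarray(scores_b)
        bias = d.mean()
        cor = 1.96 * d.std(ddof=1)        # assumed definition of repeatability
        return {"bias": bias,
                "coefficient_of_repeatability": cor,
                "limits_of_agreement": (bias - cor, bias + cor)}

    # Hypothetical paired scores from the same eyes
    print(repeatability([2.1, 2.4, 1.8, 2.9], [2.0, 2.5, 1.9, 2.7]))
    ```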

  11. Fundus Autofluorescence Features of Optic Disc Pit Related ...

    African Journals Online (AJOL)

    chorioretinopathy, retinal telangiectasia and diffuse and macular retinal dystrophies.[1,3-8] These features give useful clinical and prognostic information, making FAF a desired day-to-day clinical tool. Fundus autofluorescence signals can be detected using 3 different systems, the Delori fundus ... Fundus Autofluorescence ...

  12. 21 CFR 886.1395 - Diagnostic Hruby fundus lens.

    Science.gov (United States)

    2010-04-01

    21 CFR 886.1395 (Food and Drugs; Medical Devices; Ophthalmic Devices; Diagnostic Devices), Diagnostic Hruby fundus lens. (a) Identification. A diagnostic Hruby fundus lens is a device that is a 55 diopter lens intended for use in the...

  13. Analysis of visual appearance of retinal nerve fibers in high resolution fundus images: a study on normal subjects.

    Science.gov (United States)

    Kolar, Radim; Tornow, Ralf P; Laemmer, Robert; Odstrcilik, Jan; Mayer, Markus A; Gazarek, Jiri; Jan, Jiri; Kubena, Tomas; Cernosek, Pavel

    2013-01-01

    The retinal ganglion axons are an important part of the visual system, which can be directly observed by fundus camera. The layer they form together inside the retina is the retinal nerve fiber layer (RNFL). This paper describes results of a texture RNFL analysis in color fundus photographs and compares these results with quantitative measurement of RNFL thickness obtained from optical coherence tomography on normal subjects. It is shown that local mean value, standard deviation, and Shannon entropy extracted from the green and blue channel of fundus images are correlated with corresponding RNFL thickness. The linear correlation coefficients achieved values 0.694, 0.547, and 0.512 for respective features measured on 439 retinal positions in the peripapillary area from 23 eyes of 15 different normal subjects.
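    The three texture features named above (local mean, standard deviation and Shannon entropy of the green or blue channel) and their correlation with OCT thickness can be computed as in the sketch below; the patch size, histogram binning and function names are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def rnfl_texture_features(channel, y, x, half=8, bins=64):
        """Local mean, standard deviation and Shannon entropy of a (2*half+1)^2
        patch of one colour channel, centred on a retinal position (y, x)."""
        patch = channel[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        hist, _ = np.histogram(patch, bins=bins)
        p = hist[hist > 0] / hist.sum()            # empirical probabilities
        entropy = -np.sum(p * np.log2(p))
        return patch.mean(), patch.std(), entropy

    def correlate_with_thickness(feature_values, rnfl_thickness):
        """Pearson correlation between an image feature and OCT-measured thickness
        at the same retinal positions."""
        return np.corrcoef(feature_values, rnfl_thickness)[0, 1]
    ```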

  14. Analysis of Visual Appearance of Retinal Nerve Fibers in High Resolution Fundus Images: A Study on Normal Subjects

    Directory of Open Access Journals (Sweden)

    Radim Kolar

    2013-01-01

    Full Text Available The retinal ganglion axons are an important part of the visual system, which can be directly observed by fundus camera. The layer they form together inside the retina is the retinal nerve fiber layer (RNFL. This paper describes results of a texture RNFL analysis in color fundus photographs and compares these results with quantitative measurement of RNFL thickness obtained from optical coherence tomography on normal subjects. It is shown that local mean value, standard deviation, and Shannon entropy extracted from the green and blue channel of fundus images are correlated with corresponding RNFL thickness. The linear correlation coefficients achieved values 0.694, 0.547, and 0.512 for respective features measured on 439 retinal positions in the peripapillary area from 23 eyes of 15 different normal subjects.

  15. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that consumer grade digital cameras can be expected to become useful photogrammetric devices in various close range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we are faced with the epoch-making question of whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is presented in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to expand the market in digital photogrammetric fields.

  16. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  17. Fundus autofluorescence applications in retinal imaging

    Science.gov (United States)

    Gabai, Andrea; Veritti, Daniele; Lanzetta, Paolo

    2015-01-01

    Fundus autofluorescence (FAF) is a relatively new imaging technique that can be used to study retinal diseases. It provides information on retinal metabolism and health. Several different pathologies can be detected. Peculiar AF alterations can help the clinician to monitor disease progression and to better understand its pathogenesis. In the present article, we review FAF principles and clinical applications. PMID:26139802

  18. Analysis of visual pigment by fundus autofluorescence.

    NARCIS (Netherlands)

    Theelen, T.; Berendschot, T.T.; Boon, C.J.F.; Hoyng, C.B.; Klevering, B.J.

    2008-01-01

    This study investigated changes of short-wavelength fundus autofluorescence (SW-AF) by retinal bleaching effects. All measurements were performed with the Heidelberg Retina Angiograph 2 (HRA 2). Initially, experimental imaging was done on a healthy eye after dark adaptation. Photopigment was

  19. Fundus autofluorescence applications in retinal imaging

    Directory of Open Access Journals (Sweden)

    Andrea Gabai

    2015-01-01

    Full Text Available Fundus autofluorescence (FAF is a relatively new imaging technique that can be used to study retinal diseases. It provides information on retinal metabolism and health. Several different pathologies can be detected. Peculiar AF alterations can help the clinician to monitor disease progression and to better understand its pathogenesis. In the present article, we review FAF principles and clinical applications.

  20. Fundus reflectance : historical and present ideas

    NARCIS (Netherlands)

    Berendschot, T.T.J.M.; Delint, P.J.; Norren, D. van

    2003-01-01

    In 1851 Helmholtz introduced the ophthalmoscope. The instrument allowed the observation of light reflected at the fundus. The development of this device was one of the major advancements in ophthalmology. Yet ophthalmoscopy allows only qualitative observation of the eye. Since 1950 attempts were

  1. A study of fundus status in myopia

    Directory of Open Access Journals (Sweden)

    Christina Samuel, Sundararajan D

    2014-07-01

    Full Text Available Background: The eye is the most important sensory organ in humans, and any damage to the retina can cause diminished or lost vision. Myopia is one of the most important refractive errors of the eye, along with hypermetropia and astigmatism, and is among the commonest conditions seen in everyday practice. Myopic degeneration is a common cause of decreased visual acuity. Aim: The aim of this clinical study was to observe the fundus changes associated with myopia. Methods: A prospective study of 100 cases of myopia was conducted. A detailed anterior segment examination and a thorough posterior segment examination after mydriasis were performed with a direct ophthalmoscope and an indirect ophthalmoscope with a 20 D lens. Results: Males were more commonly affected by myopia than females (54%), and 50% of the affected cases belonged to the student community. 53.68% had retinal changes suggestive of fundus degeneration. Conclusion: Degenerative fundus changes are most commonly seen in myopic patients: tessellated fundus was found in about 90.20%, vitreous degenerative changes in 70.59%, crescent formation in 87.25%, dull foveal reflex in 82.35% and lattice degeneration in 40%.

  2. Digitization

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2014-01-01

    what a concept of digital media might add to the understanding of processes of mediatization and what the concept of mediatization might add to the understanding of digital media. It is argued that digital media open an array of new trajectories in human communication, trajectories which were...

  3. A novel method to measure conspicuous facial pores using computer analysis of digital-camera-captured images: the effect of glycolic acid chemical peeling.

    Science.gov (United States)

    Kakudo, Natsuko; Kushida, Satoshi; Tanaka, Nobuko; Minakata, Tatsuya; Suzuki, Kenji; Kusumoto, Kenji

    2011-11-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological esthetic surgery. Conspicuous facial pores are one of the most frequently encountered skin problems in women of all ages. This study was performed to analyze the effectiveness of reducing conspicuous facial pores using glycolic acid chemical peeling (GACP) based on a novel computer analysis of digital-camera-captured images. GACP was performed a total of five times at 2-week intervals in 22 healthy women. Computerized image analysis of conspicuous, open, and darkened facial pores was performed using the Robo Skin Analyzer CS 50. The number of conspicuous facial pores decreased significantly in 19 (86%) of the 22 subjects, with a mean improvement rate of 34.6%. The number of open pores decreased significantly in 16 (72%) of the subjects, with a mean improvement rate of 11.0%. The number of darkened pores decreased significantly in 18 (81%) of the subjects, with a mean improvement rate of 34.3%. GACP significantly reduces the number of conspicuous facial pores. The Robo Skin Analyzer CS 50 is useful for the quantification and analysis of 'pore enlargement', a subtle finding in dermatological esthetic surgery. © 2011 John Wiley & Sons A/S.

  4. A review of fundus autofluorescence imaging

    Directory of Open Access Journals (Sweden)

    D. J. Booysen

    2013-12-01

    Full Text Available Autofluorescence photography of the retina provides important diagnostic information about diseases that affect the outer retina, more specifically the retinal pigment epithelium and photoreceptors. Fundus autofluorescence can also be used to evaluate macular pigment density and other diseases of the retina and choroid. It is a non-invasive clinical tool which has the potential to revolutionise clinical retina practice. (S Afr Optom 2013 72(1) 46-53)

  5. A review of fundus autofluorescence imaging

    OpenAIRE

    D. J. Booysen

    2013-01-01

    Autofluorescence photography of the retina provides important diagnostic information about diseases that affect the outer retina, more specifically the retinal pigment epithelium and photoreceptors. Fundus autofluorescence can also be used to evaluate macular pigment density and other diseases of the retina and choroid. It is a non-invasive clinical tool which has the potential to revolutionise clinical retina practice. (S Afr Optom 2013 72(1) 46-53)

  6. Fundus autofluorescence and the bisretinoids of retina.

    Science.gov (United States)

    Sparrow, Janet R; Wu, Yalin; Nagasaki, Takayuki; Yoon, Kee Dong; Yamamoto, Kazunori; Zhou, Jilin

    2010-11-01

    Imaging of the human fundus of the eye with excitation wavelengths in the visible spectrum reveals a natural autofluorescence that, in a healthy retina, originates primarily from the bisretinoids that constitute the lipofuscin of retinal pigment epithelial (RPE) cells. Since the intensity and distribution of fundus autofluorescence is altered in the presence of retinal disease, we have examined the fluorescence properties of the retinal bisretinoids with a view to aiding clinical interpretations. As is also observed for fundus autofluorescence, fluorescence emission from RPE lipofuscin was generated with a wide range of exciting wavelengths; with increasing excitation wavelength, the emission maximum shifted towards longer wavelengths and spectral width was decreased. These features are consistent with fluorescence generation from a mixture of compounds. While the bisretinoids that constitute RPE lipofuscin all fluoresced with maxima that were centered around 600 nm, fluorescence intensities varied when excited at 488 nm, the excitation wavelength utilized for fundus autofluorescence imaging. For instance, the fluorescence efficiency of the bisretinoid A2-dihydropyridine-phosphatidylethanolamine (A2-DHP-PE) was greater than that of A2E and, relative to both of the latter, all-trans-retinal dimer-phosphatidylethanolamine was weakly fluorescent. On the other hand, certain photooxidized forms of the bisretinoids present in both RPE and photoreceptor cells were more strongly fluorescent than the parent compound. We also sought to evaluate whether diffuse puncta of autofluorescence observed in some retinal disorders of monogenic origin are attributable to retinoid accumulation. However, two retinoids of the visual cycle, all-trans-retinyl ester and all-trans-retinal, did not exhibit fluorescence at 488 nm excitation.

  7. Dynamic studies of liver and thyroid function with the aid of a gamma camera and an on-line digital computer

    International Nuclear Information System (INIS)

    Raikar, U.R.; Ganatra, R.D.; Samuel, A.M.; Ramanathan, P.; Atmaram, S.H.

    1975-01-01

    Initial experience of dynamic studies with the use of a 16 K digital computer coupled to a scintillation camera is described. 1. A system for scoring bolus was developed taking into account peaking time, the ratio of counts at the peak and at 8 s after the peak and full width at half maximum. Various parameters affecting the boli are discussed. 2. A method is established for finding the slope of net early trapping of (99m)TcO4 in the thyroid after subtraction of extrathyroidal vascular background. This value was found diagnostically useful in establishing the state of thyroid function in 26 patients. 3. Portal extraction half-times of various colloidal radiopharmaceuticals were studied in the first two-minute dynamic study of the liver. This determination provided a method of bioassay for the consistency of the production of colloid for liver scintigraphy. Differences were noted in the trapping time between the right and left lobes of the liver. 4. On the basis of portal extraction half-times, (99m)Tc phytate appeared to become colloidal instantaneously after injection into the circulation and its behavior in dynamic studies was more or less identical with that of the (99m)Tc-S-colloid. 5. Normal liver has a dual blood supply, while a malignancy in the liver derives blood from only the hepatic artery. Benign lesions such as abscesses and cysts are relatively avascular. This difference in the blood supply of benign and malignant space-occupying lesions in the liver was exploited in an early dynamic study of blood flow to offer a clue to the pathology of cold areas in 170 patients. (author)

  8. The Portable Dynamic Fundus Instrument: Uses in telemedicine and research

    Science.gov (United States)

    Hunter, Norwood; Caputo, Michael; Billica, Roger; Taylor, Gerald; Gibson, C. Robert; Manuel, F. Keith; Mader, Thomas; Meehan, Richard

    1994-01-01

    For years ophthalmic photographs have been used to track the progression of many ocular diseases such as macular degeneration and glaucoma as well as the ocular manifestations of diabetes, hypertension, and hypoxia. In 1987 a project was initiated at the Johnson Space Center (JSC) to develop a means of monitoring retinal vascular caliber and intracranial pressure during space flight. To conduct telemedicine during space flight operations, retinal images would require real-time transmissions from space. Film-based images would not be useful during in-flight operations. Video technology is beneficial in flight because the images may be acquired, recorded, and transmitted to the ground for rapid computer digital image processing and analysis. The computer analysis techniques developed for this project detected vessel caliber changes as small as 3 percent. In the field of telemedicine, the Portable Dynamic Fundus Instrument demonstrates the concept and utility of a small, self-contained video funduscope. It was used to record retinal images during the Gulf War and to transmit retinal images from the Space Shuttle Columbia during STS-50. There are plans to utilize this device to provide a mobile ophthalmic screening service in rural Texas. In the fall of 1993 a medical team in Boulder, Colorado, will transmit real-time images of the retina during remote consultation and diagnosis. The research applications of this device include the capability of operating in remote locations or small, confined test areas. There has been interest shown in utilizing retinal imaging during high-G centrifuge tests, high-altitude chamber tests, and aircraft flight tests. A new design plan has been developed to incorporate the video instrumentation into face-mounted goggles. This design would eliminate head restraint devices, thus allowing full maneuverability to the subjects. Further development of software programs will broaden the application of the Portable Dynamic Fundus Instrument in

  9. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  10. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  11. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses from a mode-locked Nd:glass laser acts as an ultra-fast periodic shutter with an opening time of a few picoseconds. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing very fast effects to be studied. [fr]

  12. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Chaum, Edward [ORNL; Karnowski, Thomas Paul [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Abramoff, M.D. [University of Iowa

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras, and there are some documented approaches to the problem of automatically judging image quality. We propose a new set of features, independent of field of view or resolution, that describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate image quality in a time one order of magnitude shorter than previous techniques.
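
    The feature set itself is not specified in this record; the Python sketch below only illustrates the general idea of a local vessel density measure computed over an elliptical polar grid, assuming a binary vessel mask is already available from some segmentation step. Grid geometry and summary statistics are illustrative, not the published ELVD features.

```python
# Minimal sketch of a local-vessel-density style quality feature.
import numpy as np

def local_vessel_density(vessel_mask, n_rings=5, n_wedges=8):
    """Fraction of vessel pixels inside each cell of an elliptical polar grid."""
    h, w = vessel_mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    # Normalised elliptical radius (1.0 at the image border) and polar angle.
    r = np.sqrt(((y - cy) / (h / 2.0)) ** 2 + ((x - cx) / (w / 2.0)) ** 2)
    theta = np.arctan2(y - cy, x - cx)

    densities = []
    for i in range(n_rings):
        for j in range(n_wedges):
            cell = ((r >= i / n_rings) & (r < (i + 1) / n_rings) &
                    (theta >= -np.pi + j * 2 * np.pi / n_wedges) &
                    (theta < -np.pi + (j + 1) * 2 * np.pi / n_wedges))
            if cell.any():
                densities.append(vessel_mask[cell].mean())
    return np.asarray(densities)

# A very uneven density distribution (e.g. vessels visible in only one half of
# the field) hints at poor image quality such as blur or uneven exposure.
mask = np.random.rand(480, 640) > 0.97          # stand-in for a real vessel mask
d = local_vessel_density(mask.astype(float))
print("mean density %.4f, spread %.4f" % (d.mean(), d.std()))
```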

  13. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    Science.gov (United States)

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration to the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.
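
    As a rough illustration of the underlying idea (not the authors' implementation), the sketch below runs a basic maximum a-posteriori multi-frame super-resolution with known translational shifts and a simple Laplacian prior; registration, photometric correction and the automatic parameter selection described above are omitted.

```python
# Minimal MAP super-resolution sketch with a known, purely translational motion
# model and a Tikhonov (Laplacian) prior standing in for the robust prior.
import numpy as np

def degrade(x, shift, factor):
    """Forward model: shift the high-resolution image, then decimate."""
    x = np.roll(x, shift, axis=(0, 1))
    return x[::factor, ::factor]

def degrade_T(y, shift, factor, hr_shape):
    """Adjoint of the forward model (zero-fill upsampling, inverse shift)."""
    x = np.zeros(hr_shape)
    x[::factor, ::factor] = y
    return np.roll(x, (-shift[0], -shift[1]), axis=(0, 1))

def map_sr(frames, shifts, factor=2, lam=0.05, n_iter=100, step=0.2):
    hr_shape = (frames[0].shape[0] * factor, frames[0].shape[1] * factor)
    x = np.kron(frames[0], np.ones((factor, factor)))   # initial guess
    for _ in range(n_iter):
        grad = np.zeros(hr_shape)
        for y, s in zip(frames, shifts):
            grad += degrade_T(degrade(x, s, factor) - y, s, factor, hr_shape)
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= step * (grad - lam * lap)                   # gradient step on the MAP cost
    return x

# Synthetic usage: four shifted, decimated copies of a random "fundus" image.
truth = np.random.rand(64, 64)
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]
frames = [degrade(truth, s, 2) for s in shifts]
estimate = map_sr(frames, shifts)
print("reconstruction error:", np.abs(estimate - truth).mean())
```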

  14. Thickness related textural properties of retinal nerve fiber layer in color fundus images.

    Science.gov (United States)

    Odstrcilik, Jan; Kolar, Radim; Tornow, Ralf-Peter; Jan, Jiri; Budai, Attila; Mayer, Markus; Vodakova, Martina; Laemmer, Robert; Lamos, Martin; Kuna, Zdenek; Gazarek, Jiri; Kubena, Tomas; Cernosek, Pavel; Ronzhina, Marina

    2014-09-01

    Images of ocular fundus are routinely utilized in ophthalmology. Since an examination using fundus camera is relatively fast and cheap procedure, it can be used as a proper diagnostic tool for screening of retinal diseases such as the glaucoma. One of the glaucoma symptoms is progressive atrophy of the retinal nerve fiber layer (RNFL) resulting in variations of the RNFL thickness. Here, we introduce a novel approach to capture these variations using computer-aided analysis of the RNFL textural appearance in standard and easily available color fundus images. The proposed method uses the features based on Gaussian Markov random fields and local binary patterns, together with various regression models for prediction of the RNFL thickness. The approach allows description of the changes in RNFL texture, directly reflecting variations in the RNFL thickness. Evaluation of the method is carried out on 16 normal ("healthy") and 8 glaucomatous eyes. We achieved significant correlation (normals: ρ=0.72±0.14; p≪0.05, glaucomatous: ρ=0.58±0.10; p≪0.05) between values of the model predicted output and the RNFL thickness measured by optical coherence tomography, which is currently regarded as a standard glaucoma assessment device. The evaluation thus revealed good applicability of the proposed approach to measure possible RNFL thinning. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
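
    A minimal sketch of the texture-to-thickness idea follows, with local binary pattern histograms and ridge regression used as simplified stand-ins for the GMRF/LBP features and regression models of the paper; the training data below are synthetic placeholders.

```python
# Illustrative sketch: LBP histograms of fundus-image patches regressed against
# OCT-measured RNFL thickness (synthetic data, simplified feature set).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import Ridge

def lbp_histogram(patch, P=8, R=1):
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical training data: green-channel patches and their OCT thicknesses.
rng = np.random.default_rng(0)
patches = (rng.random((200, 41, 41)) * 255).astype(np.uint8)
thickness_um = rng.uniform(60, 140, size=200)

X = np.array([lbp_histogram(p) for p in patches])
model = Ridge(alpha=1.0).fit(X, thickness_um)
print("predicted RNFL thickness (um):", model.predict(X[:3]))
```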

  15. DIGITAL

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  16. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    The resolution of cameras has been drastically improved in response to the demand for high-quality digital images; a digital still camera, for example, has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  17. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  18. Time-resolved imaging of prompt-gamma rays for proton range verification using a knife-edge slit camera based on digital photon counters

    Science.gov (United States)

    Cambraia Lopes, Patricia; Clementel, Enrico; Crespo, Paulo; Henrotin, Sebastien; Huizenga, Jan; Janssens, Guillaume; Parodi, Katia; Prieels, Damien; Roellinghoff, Frauke; Smeets, Julien; Stichelbaut, Frederic; Schaart, Dennis R.

    2015-08-01

    Proton range monitoring may facilitate online adaptive proton therapy and improve treatment outcomes. Imaging of proton-induced prompt gamma (PG) rays using a knife-edge slit collimator is currently under investigation as a potential tool for real-time proton range monitoring. A major challenge in collimated PG imaging is the suppression of neutron-induced background counts. In this work, we present an initial performance test of two knife-edge slit camera prototypes based on arrays of digital photon counters (DPCs). PG profiles emitted from a PMMA target upon irradiation with 160 MeV proton pencil beams (about 6.5 × 10^9 protons delivered in total) were measured using detector modules equipped with four DPC arrays coupled to BGO or LYSO:Ce crystal matrices. The knife-edge slit collimator and detector module were placed at 15 cm and 30 cm from the beam axis, respectively, in all cases. The use of LYSO:Ce enabled time-of-flight (TOF) rejection of background events, by synchronizing the DPC readout electronics with the 106 MHz radiofrequency signal of the cyclotron. The signal-to-background (S/B) ratio of 1.6 obtained with a 1.5 ns TOF window and a 3-7 MeV energy window was about 3 times higher than that obtained with the same detector module without TOF discrimination and 2 times higher than the S/B ratio obtained with the BGO module. Even 1 mm shifts of the Bragg peak position translated into clear and consistent shifts of the PG profile if TOF discrimination was applied, for a total number of protons as low as about 6.5 × 10^8 and a detector surface of 6.6 cm × 6.6 cm.
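
    The following toy sketch (simulated event data, not the published analysis) illustrates how a TOF window synchronized to the cyclotron RF and a 3-7 MeV energy window suppress uncorrelated background; all distributions and window settings are illustrative.

```python
# Toy TOF/energy discrimination: keep only events inside a 1.5 ns window around
# the prompt-gamma arrival phase and inside the 3-7 MeV energy window.
import numpy as np

rng = np.random.default_rng(1)
RF_PERIOD_NS = 1e3 / 106.0                # ~9.4 ns period of a 106 MHz RF signal

# Hypothetical event list: arrival time modulo the RF period and deposited energy.
prompt_t = rng.normal(3.0, 0.4, 50_000) % RF_PERIOD_NS    # correlated with the RF
neutron_t = rng.uniform(0, RF_PERIOD_NS, 50_000)          # uncorrelated background
t = np.concatenate([prompt_t, neutron_t])
e = rng.uniform(1.0, 9.0, t.size)                         # deposited energy in MeV
is_signal = np.arange(t.size) < prompt_t.size

keep = (np.abs(t - 3.0) < 0.75) & (e > 3.0) & (e < 7.0)   # 1.5 ns TOF + energy window
s = np.count_nonzero(keep & is_signal)
b = np.count_nonzero(keep & ~is_signal)
print("signal-to-background ratio:", s / b)
```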

  19. Using ground observations of a digital camera in the VIS-NIR range for quantifying the phenology of Mediterranean woody species

    Science.gov (United States)

    Weil, Gilad; Lensky, Itamar M.; Levin, Noam

    2017-10-01

    The spectral reflectance of most plant species is quite similar, and thus the feasibility of identifying most plant species based on single date multispectral data is very low. Seasonal phenological patterns of plant species may enable to face the challenge of using remote sensing for mapping plant species at the individual level. We used a consumer-grade digital camera with near infra-red capabilities in order to extract and quantify vegetation phenological information in four East Mediterranean sites. After illumination corrections and other noise reduction steps, the phenological patterns of 1839 individuals representing 12 common species were analyzed, including evergreen trees, winter deciduous trees, semi-deciduous summer shrubs and annual herbaceous patches. Five vegetation indices were used to describe the phenology: relative green and red (green/red chromatic coordinate), excess green (ExG), normalized difference vegetation index (NDVI) and green-red vegetation index (GRVI). We found significant differences between the phenology of the various species, and defined the main phenological groups using agglomerative hierarchical clustering. Differences between species and sites regarding the start of season (SOS), maximum of season (MOS) and end of season (EOS) were displayed in detail, using ExG values, as this index was found to have the lowest percentage of outliers. An additional visible band spectral index (relative red) was found as useful for characterizing seasonal phenology, and had the lowest correlation with the other four vegetation indices, which are more sensitive to greenness. We used a linear mixed model in order to evaluate the influences of various factors on the phenology, and found that unlike the significant effect of species and individuals on SOS, MOS and EOS, the sites' location did not have a direct significant effect on the timing of phenological events. In conclusion, the relative advantage of the proposed methodology is the
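
    For reference, the sketch below computes the standard definitions of the five indices mentioned above from calibrated red, green, blue and near-infrared bands; the exact normalization used by the authors may differ, and the input arrays are placeholders.

```python
# Per-pixel vegetation indices from a VIS-NIR camera (standard textbook forms).
import numpy as np

def vegetation_indices(red, green, blue, nir, eps=1e-9):
    brightness = red + green + blue + eps
    gcc = green / brightness                      # green chromatic coordinate ("relative green")
    rcc = red / brightness                        # "relative red"
    exg = 2 * green - red - blue                  # excess green (ExG)
    ndvi = (nir - red) / (nir + red + eps)        # normalized difference vegetation index
    grvi = (green - red) / (green + red + eps)    # green-red vegetation index
    return gcc, rcc, exg, ndvi, grvi

# Seasonal curves of, e.g., ExG per tree crown can then be used to locate the
# start, maximum and end of season for each individual.
bands = [np.random.rand(100, 100) for _ in range(4)]      # stand-in for real imagery
gcc, rcc, exg, ndvi, grvi = vegetation_indices(*bands)
```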

  20. Automatic Detection of Diabetic Retinopathy in Digital Fundus Photographs

    NARCIS (Netherlands)

    Niemeijer, M.

    2006-01-01

    Diabetic retinopathy is a common ocular complication of diabetes. It is the most frequent cause of blindness in the working population of the United States and the European Union. Early diagnosis, and treatment can prevent vision loss in the majority of cases. Yet only approximately 50% of people

  1. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have been dramatically diffused. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  2. Automatic multiresolution age-related macular degeneration detection from fundus images

    Science.gov (United States)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly; therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described by the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
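
    A highly simplified sketch of this pipeline is given below: a wavelet decomposition of the green channel, plain uniform LBP histograms (instead of the completed sign/magnitude LBP model) on each coefficient image, then LDA. The data and parameters are synthetic placeholders.

```python
# Rough sketch of the multiresolution texture pipeline (simplified features).
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def to_uint8(img):
    img = img - img.min()
    return np.uint8(255 * img / (img.max() + 1e-9))

def texture_descriptor(green, levels=2, P=8, R=1):
    feats = []
    coeffs = pywt.wavedec2(green, "haar", level=levels)
    images = [coeffs[0]] + [band for level in coeffs[1:] for band in level]
    for img in images:
        codes = local_binary_pattern(to_uint8(img), P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.extend(hist)
    return np.asarray(feats)

# Hypothetical labelled data: 0 = healthy, 1 = AMD.
rng = np.random.default_rng(0)
images = rng.random((45, 128, 128))
labels = rng.integers(0, 2, size=45)
X = np.array([texture_descriptor(im) for im in images])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```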

  3. FUNDUS AUTOFLUORESCENCE LIFETIMES AND CENTRAL SEROUS CHORIORETINOPATHY.

    Science.gov (United States)

    Dysli, Chantal; Berger, Lieselotte; Wolf, Sebastian; Zinkernagel, Martin S

    2017-11-01

    To quantify retinal fluorescence lifetimes in patients with central serous chorioretinopathy (CSC) and to identify disease specific lifetime characteristics over the course of disease. Forty-seven participants were included in this study. Patients with central serous chorioretinopathy were imaged with fundus photography, fundus autofluorescence, optical coherence tomography, and fluorescence lifetime imaging ophthalmoscopy (FLIO) and compared with age-matched controls. Retinal autofluorescence was excited using a 473-nm blue laser light and emitted fluorescence light was detected in 2 distinct wavelength channels (498-560 nm and 560-720 nm). Clinical features, mean retinal autofluorescence lifetimes, autofluorescence intensity, and corresponding optical coherence tomography (OCT) images were further analyzed. Thirty-five central serous chorioretinopathy patients with a mean visual acuity of 78 ETDRS letters (range, 50-90; mean Snellen equivalent: 20/32) and 12 age-matched controls were included. In the acute stage of central serous chorioretinopathy, retinal fluorescence lifetimes were shortened by 15% and 17% in the respective wavelength channels. Multiple linear regression analysis showed that fluorescence lifetimes were significantly influenced by the disease duration (P < 0.05). Longer autofluorescence lifetimes, particularly in eyes with retinal pigment epithelial atrophy, were associated with poor visual acuity. This study establishes that autofluorescence lifetime changes occurring in central serous chorioretinopathy exhibit explicit patterns which can be used to estimate perturbations of the outer retinal layers with a high degree of statistical significance.

  4. Fundus autofluorescence patterns in primary intraocular lymphoma.

    Science.gov (United States)

    Casady, Megan; Faia, Lisa; Nazemzadeh, Maryam; Nussenblatt, Robert; Chan, Chi-Chao; Sen, H Nida

    2014-02-01

    To evaluate fundus autofluorescence (FAF) patterns in patients with primary intraocular (vitreoretinal) lymphoma. Records of all patients with primary intraocular lymphoma who underwent FAF imaging at the National Eye Institute were reviewed. Fundus autofluorescence patterns were evaluated with respect to clinical disease status and the findings on fluorescein angiography and spectral-domain optical coherence tomography. There were 18 eyes (10 patients) with primary intraocular lymphoma that underwent FAF imaging. Abnormal autofluorescence in the form of granular hyperautofluorescence and hypoautofluorescence was seen in 11 eyes (61%), and blockage by mass lesion was seen in 2 eyes (11%). All eyes with a granular pattern on FAF had active primary intraocular lymphoma at the time of imaging, but there were 5 eyes with unremarkable FAF, which were found to have active lymphoma. The most common pattern on fluorescein angiography was hypofluorescent round spots with a "leopard spot" appearance (43%). These hypofluorescent spots on fluorescein angiography correlated with hyperautofluorescent spots on FAF in 5 eyes (36%) (inversion of FAF). Nodular hyperreflective spots at the level of retinal pigment epithelium on optical coherence tomography were noted in 43% of eyes. The hyperautofluorescent spots on FAF correlated with nodular hyperreflective spots on optical coherence tomography in 6 eyes (43%). Granularity on FAF was associated with active lymphoma in the majority of cases. An inversion of FAF (hyperautofluorescent spots on FAF corresponding to hypofluorescent spots on fluorescein angiography) was observed in less than half of the eyes.

  5. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol

    Directory of Open Access Journals (Sweden)

    Hui-Qun Wu

    2013-12-01

    Full Text Available AIM: To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. METHODS: Firstly, a telemedicine-based eye care work flow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, which contains three tiers. RESULTS: In any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report whose multipurpose internet mail extensions (MIME) type is saved as pdf/html, with a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Some functions provided by open-source Oviyam could be used to query, zoom, move, measure and view DICOM fundus images. CONCLUSION: Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
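
    As an illustration of the WADO syntax referred to above, the following sketch retrieves one fundus image via a WADO-URI request; the server URL and UIDs are placeholders, and a real deployment would take them from the eye-PACS database.

```python
# Hedged WADO-URI example: fetch a stored fundus object as a JPEG rendering.
import requests

params = {
    "requestType": "WADO",
    "studyUID": "1.2.840.999.1",        # placeholder UIDs
    "seriesUID": "1.2.840.999.1.1",
    "objectUID": "1.2.840.999.1.1.1",
    "contentType": "image/jpeg",        # or "application/dicom" for the raw object
}
resp = requests.get("https://pacs.example.org/wado", params=params, timeout=30)
resp.raise_for_status()
with open("fundus.jpg", "wb") as f:
    f.write(resp.content)
```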

  6. Optic Disc Detection from Fundus Photography via Best-Buddies Similarity

    Directory of Open Access Journals (Sweden)

    Kangning Hou

    2018-05-01

    Full Text Available Robust and effective optic disc (OD) detection is a necessary processing step in the automatic analysis of fundus images. In this paper, we propose a novel and robust method for the automated detection of ODs from fundus photographs. It is essentially carried out by performing template matching using the Best-Buddies Similarity (BBS) measure between the hand-marked OD region and small parts of the target images. To better characterize the local spatial information of fundus images, a gradient constraint term was introduced for computing the BBS measurement. The performance of the proposed method is validated with the Digital Retinal Images for Vessel Extraction (DRIVE) and Standard Diabetic Retinopathy Database Calibration Level 1 (DIARETDB1) databases, and quantitative results were obtained. Success rates/error distances of 100%/10.4 pixels and of 97.7%/12.9 pixels, respectively, were achieved. The algorithm has been tested and compared with other commonly used methods, and the results show that the proposed method offers superior performance.
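
    A minimal sketch of the Best-Buddies Similarity idea is shown below: two patch sets are compared by counting pairs of patches that are mutual nearest neighbours. The descriptor and the gradient constraint term of the paper are not reproduced; plain patch vectors and random data stand in for real image content.

```python
# Best-Buddies Similarity between a template patch set and a candidate window.
import numpy as np

def bbs(P, Q):
    """P: (n, d) template descriptors, Q: (m, d) candidate-window descriptors."""
    d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    nn_of_p = d.argmin(axis=1)      # for each template patch, its best buddy in Q
    nn_of_q = d.argmin(axis=0)      # for each candidate patch, its best buddy in P
    mutual = sum(1 for i, j in enumerate(nn_of_p) if nn_of_q[j] == i)
    return mutual / min(len(P), len(Q))

# Scanning candidate windows over the fundus image and keeping the window with
# the highest BBS score against the hand-marked optic-disc template gives the
# detected OD location.
rng = np.random.default_rng(0)
template_patches = rng.random((64, 27))     # e.g. 64 patches of 3x3x3 RGB values
window_patches = rng.random((64, 27))
print("BBS score:", bbs(template_patches, window_patches))
```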

  7. Automated classification and quantitative analysis of arterial and venous vessels in fundus images

    Science.gov (United States)

    Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng

    2018-02-01

    It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom hat filtering and global thresholding are used to segment the vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as a starting point, the whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. 50 color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared to ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis based on the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).
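
    The vessel enhancement front end can be sketched as follows (oriented Gaussian matched filters plus a crude global threshold); the ODR-based artery/vein labelling and the tracking stage are not shown, and all kernel sizes and the threshold are illustrative rather than the authors' settings.

```python
# Simplified matched-filter vessel enhancement and segmentation sketch.
import numpy as np
from scipy import ndimage

def matched_filter_kernel(sigma=2.0, length=9, angle_deg=0.0):
    half = length // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Zero-mean Gaussian profile across the vessel, constant along it
    # (vessels appear as dark lines in the green channel).
    k = -np.exp(-(x ** 2) / (2 * sigma ** 2))
    k -= k.mean()
    return ndimage.rotate(k, angle_deg, reshape=False)

def enhance_vessels(green, n_angles=12):
    responses = [ndimage.convolve(green.astype(float),
                                  matched_filter_kernel(angle_deg=a))
                 for a in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(responses, axis=0)          # strongest response over orientations

green = np.random.rand(256, 256)              # stand-in for a fundus green channel
response = enhance_vessels(green)
vessel_mask = response > response.mean() + 2 * response.std()   # crude global threshold
# vessel_mask would next be skeletonised and tracked from the optic-disk nodes.
```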

  8. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  9. Quantitative fundus autofluorescence in healthy eyes.

    Science.gov (United States)

    Greenberg, Jonathan P; Duncker, Tobias; Woods, Russell L; Smith, R Theodore; Sparrow, Janet R; Delori, François C

    2013-08-21

    Fundus autofluorescence was quantified (qAF) in subjects with healthy retinae using a standardized approach. The objective was to establish normative data and identify factors that influence the accumulation of RPE lipofuscin and/or modulate the observed AF signal in fundus images. AF images were acquired from 277 healthy subjects (age range: 5-60 years) by employing a Spectralis confocal scanning laser ophthalmoscope (cSLO; 488-nm excitation; 30°) equipped with an internal fluorescent reference. For each image, mean gray level was calculated as the average of eight preset regions, and was calibrated to the reference, zero-laser light, magnification, and optical media density from normative data on lens transmission spectra. Relationships between qAF and age, sex, race/ethnicity, eye color, refraction/axial length, and smoking status were evaluated as was measurement repeatability and the qAF spatial distribution. qAF levels exhibited a significant increase with age. qAF increased with increasing eccentricity up to 10° to 15° from the fovea and was highest superotemporally. qAF values were significantly greater in females, and, compared with Hispanics, qAF was significantly higher in whites and lower in blacks and Asians. No associations with axial length and smoking were observed. For two operators, between-session repeatability was ± 9% and ± 12%. Agreement between the operators was ± 13%. Normative qAF data are a reference tool essential to the interpretation of qAF measurements in ocular disease.

  10. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen

  11. Multispectral imaging of the ocular fundus using light emitting diode illumination.

    Science.gov (United States)

    Everdell, N L; Styles, I B; Calcagni, A; Gibson, J; Hebden, J; Claridge, E

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.

  12. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the state of the art is described, permitting energy discrimination in addition to good localization. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de]

  13. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities and operates on the four head position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by the nuclear radiation reaching the camera detector head. The system includes analog processing of the position signals from the camera, digitization, subsequent processing of the energy signal in a multichannel analyzer, transmission of the data to a computer via a standard USB port, and processing of the data in a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  14. Fundus albipunctatus associated with compound heterozygous mutations in RPE65

    DEFF Research Database (Denmark)

    Schatz, Patrik; Preising, Markus; Lorenz, Birgit

    2011-01-01

    To describe a family with an 18-year-old woman with fundus albipunctatus and compound heterozygous mutations in RPE65, whose unaffected parents and 1 female sibling harbored single heterozygous RPE65 mutations.

  15. Long-term fundus changes in acquired partial lipodystrophy.

    Science.gov (United States)

    Jansen, Joyce; Delaere, Lien; Spielberg, Leigh; Leys, Anita

    2013-11-18

    We describe long-term fundus changes in a patient with partial lipodystrophy (PL). Retinal pigment alterations, drusen and subretinal neovascularisation were seen without evidence for membranoproliferative glomerulonephritis. Fundus alterations similar to those seen in age-related macular degeneration can occur at an earlier age in patients with PL, even without renal disease. Dysregulation of an alternative complement pathway with low serum levels of C3 has been implicated as a pathogenetic mechanism.

  16. Analysis of phakic before intraocular lens implantation for fundus examination

    OpenAIRE

    Juan Chen; Zhong-Ping Chen; Rui-Ling Zhu

    2014-01-01

    AIM: To investigate the findings in eyes examined preoperatively with a three-mirror contact lens before implantation of an implantable collamer lens (ICL), to analyze the retinal pathological changes, and to explore the early diagnosis and treatment of retinopathy found on fundus examination before operation. METHODS: This retrospective case series included 127 eyes of 64 patients who underwent phakic intraocular lens implantation and received fundus examina...

  17. Dual beam vidicon digitizer

    International Nuclear Information System (INIS)

    Evans, T.L.

    1976-01-01

    A vidicon waveform digitizer which can simultaneously digitize two independent signals has been developed. Either transient or repetitive waveforms can be digitized with this system. A dual beam oscilloscope is used as the signal input device. The light from the oscilloscope traces is optically coupled to a television camera, where the signals are temporarily stored prior to digitizing

  18. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage

  19. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras

  20. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distance between the individual photomultiplier tubes of the multiple radioisotope camera on one hand and between the tube configuration and the scintillator plate on the other. For this purpose the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG) [de]

  1. Fundus autofluorescence in chronic essential hypertension.

    Science.gov (United States)

    Ramezani, Alireza; Saberian, Peyman; Soheilian, Masoud; Parsa, Saeed Alipour; Kamali, Homayoun Koohi; Entezari, Morteza; Shahbazi, Mohammad-Mehdi; Yaseri, Mehdi

    2014-01-01

    To evaluate fundus autofluorescence (FAF) changes in patients with chronic essential hypertension (HTN). In this case-control study, 35 eyes of 35 patients with chronic essential HTN (lasting >5 years) and 31 eyes of 31 volunteers without history of HTN were included. FAF pictures were taken from right eyes of all cases with the Heidelberg retina angiography and then were assessed by two masked retinal specialists. In the 66 FAF images (35 from hypertensive patients and 31 from volunteers), three apparently abnormal patterns were detected. A ring of hyper-autofluorescence in the central macula (doughnut-shaped) was observed in 9 (25.7%) eyes of the hypertensive group but only in 2 (6.5%) eyes of the control group. This difference was statistically significant (P = 0.036) between two groups. Hypo- and/or hyper-autofluorescence patches outside the fovea were the other sign found more in the hypertensive group (22.9%) than in the control group (6.5%); however, the difference was not statistically significant (P = 0.089). The third feature was hypo-autofluorescence around the disk noticed in 11 (31.4%) eyes of hypertensive patients compared to 8 (25.8%) eyes of the controls (P = 0.615). A ring of hyper-autofluorescence in the central macula forming a doughnut-shaped feature may be a FAF sign in patients with chronic essential HTN.

  2. Fundus Autofluorescence in Chronic Essential Hypertension

    Directory of Open Access Journals (Sweden)

    Alireza Ramezani

    2014-01-01

    Full Text Available Purpose: To evaluate fundus autofluorescence (FAF) changes in patients with chronic essential hypertension (HTN). Methods: In this case-control study, 35 eyes of 35 patients with chronic essential HTN (lasting >5 years) and 31 eyes of 31 volunteers without history of HTN were included. FAF pictures were taken from right eyes of all cases with the Heidelberg retina angiography and then were assessed by two masked retinal specialists. Results: In the 66 FAF images (35 from hypertensive patients and 31 from volunteers), three apparently abnormal patterns were detected. A ring of hyper-autofluorescence in the central macula (doughnut-shaped) was observed in 9 (25.7%) eyes of the hypertensive group but only in 2 (6.5%) eyes of the control group. This difference was statistically significant (P = 0.036) between two groups. Hypo- and/or hyper-autofluorescence patches outside the fovea were the other sign found more in the hypertensive group (22.9%) than in the control group (6.5%); however, the difference was not statistically significant (P = 0.089). The third feature was hypo-autofluorescence around the disk noticed in 11 (31.4%) eyes of hypertensive patients compared to 8 (25.8%) eyes of the controls (P = 0.615). Conclusion: A ring of hyper-autofluorescence in the central macula forming a doughnut-shaped feature may be a FAF sign in patients with chronic essential HTN.

  3. Comparison of subjective and objective methods to determine the retinal arterio-venous ratio using fundus photography

    Directory of Open Access Journals (Sweden)

    Rebekka Heitmar

    2015-10-01

    Conclusion: Grader education and experience lead to inter-grader differences but, more importantly, subjective grading is not capable of picking up subtle differences across healthy individuals and does not represent the true AVR when compared with an objective assessment method. Technological advancements mean we no longer rely on ophthalmoscopic evaluation but can capture and store fundus images with retinal cameras, enabling vessel calibre to be measured more accurately than by visual estimation; hence it should be integrated into optometric practice for improved accuracy and reliability of clinical assessments of retinal vessel calibres.

  4. 3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images

    Science.gov (United States)

    Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Thom, S. A. McG.

    Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to 2 dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential matrix based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach and correspondence between points from different images was calculated. The results of 3D reconstruction show the centreline of retinal vessels and their 3D curvature clearly. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.
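
    A hedged sketch of the geometric core of such a pipeline is given below, assuming the camera intrinsics and point correspondences are available (the paper obtains them through self-calibration of the "fundus camera-eye" system and semi-automatic vessel segmentation); the synthetic data and OpenCV calls are illustrative, not the authors' implementation.

```python
# Essential-matrix pose recovery and triangulation of retinal centreline points.
import numpy as np
import cv2

def reconstruct_3d(pts1, pts2, K):
    """Estimate relative pose from 2D-2D correspondences and triangulate."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)      # homogeneous, 4 x N
    return (X[:3] / X[3]).T                                # N x 3, up to scale

def project(P, X):
    Xh = np.c_[X, np.ones(len(X))].T
    x = P @ Xh
    return (x[:2] / x[2]).T

# Synthetic check with placeholder intrinsics: project random 3D "vessel"
# points into two slightly rotated/translated views, then reconstruct them.
rng = np.random.default_rng(0)
K = np.array([[1100.0, 0, 640], [0, 1100.0, 480], [0, 0, 1]])
pts3d = rng.uniform([-5, -5, 20], [5, 5, 30], size=(60, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))
t_true = np.array([[1.0], [0.0], [0.0]])
pts1 = project(K @ np.hstack([np.eye(3), np.zeros((3, 1))]), pts3d).astype(np.float32)
pts2 = project(K @ np.hstack([R_true, t_true]), pts3d).astype(np.float32)
centreline_3d = reconstruct_3d(pts1, pts2, K)
print(centreline_3d.shape)
```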

  5. Fundus Autofluorescence and Photoreceptor Cell Rosettes in Mouse Models

    Science.gov (United States)

    Flynn, Erin; Ueda, Keiko; Auran, Emily; Sullivan, Jack M.; Sparrow, Janet R.

    2014-01-01

    Purpose. This study was conducted to study correlations among fundus autofluorescence (AF), RPE lipofuscin accumulation, and photoreceptor cell degeneration and to investigate the structural basis of fundus AF spots. Methods. Fundus AF images (55° lens; 488-nm excitation) and spectral-domain optical coherence tomography (SD-OCT) scans were acquired in pigmented Rdh8−/−/Abca4−/− mice (ages 1–9 months) with a confocal scanning laser ophthalmoscope (cSLO). For quantitative fundus AF (qAF), gray levels (GLs) were calibrated to an internal fluorescence reference. Retinal bisretinoids were measured by quantitative HPLC. Histometric analysis of outer nuclear layer (ONL) thicknesses was performed, and cryostat sections of retina were examined by fluorescence microscopy. Results. Quantified A2E and qAF intensities increased until age 4 months in the Rdh8−/−/Abca4−/− mice. The A2E levels declined after 4 months of age, but qAF intensity values continued to rise. The decline in A2E levels in the Rdh8−/−/Abca4−/− mice paralleled reduced photoreceptor cell viability as reflected in ONL thinning. Hyperautofluorescent puncta in fundus AF images corresponded to photoreceptor cell rosettes in SD-OCT images and histological sections stained with hematoxylin and eosin. The inner segment/outer segment–containing core of the rosette emitted an autofluorescence detected by fluorescence microscopy. Conclusions. When neural retina is disordered, AF from photoreceptor cells can contribute to noninvasive fundus AF images. Hyperautofluorescent puncta in fundus AF images are attributable, in at least some cases, to photoreceptor cell rosettes. PMID:25015357

  6. Fundus fluorescence Angiography in diagnosing diabetic retinopathy.

    Science.gov (United States)

    Wang, Shuhui; Zuo, Yuqin; Wang, Ning; Tong, Bin

    2017-01-01

    To investigate the manifestation characteristics of fundus fluorescence angiography (FFA) and its value in diagnosing diabetic retinopathy compared with direct ophthalmoscopy. Two hundred fifty patients (500 eyes) with suspected diabetic retinopathy admitted to the hospital between February 2015 and December 2016 were selected. They underwent direct ophthalmoscopy and FFA. The manifestation characteristics of FFA in the diagnosis of diabetic retinopathy were summarized and the two examination methods were compared. With direct ophthalmoscopy, 375 of the 500 eyes were diagnosed as diabetic retinopathy (75%); there were 74 eyes at stage I, 88 at stage II, 92 at stage III, 83 at stage IV, 28 at stage V and 10 at stage VI. With FFA, 465 of the 500 eyes were diagnosed as diabetic retinopathy (93%); there were 94 eyes at stage I, 110 at stage II, 112 at stage III, 92 at stage IV, 41 at stage V and 16 at stage VI. The detection rate of diabetic retinopathy using FFA was significantly higher than that using direct ophthalmoscopy (P < 0.05). Of the eyes diagnosed by FFA, 67.96% had non-proliferative retinopathy, 75 eyes had pre-proliferative lesions (16.13%), 149 eyes had proliferative lesions (32.04%), 135 eyes had diabetic maculopathy (29.03%) and 31 eyes had diabetic optic disc lesions (6.67%). The detection rate of diabetic retinopathy using FFA is higher than that using direct ophthalmoscopy, and FFA could accurately determine the clinical stage. Therefore, it is an important approach for treatment efficacy evaluation and treatment guidance, suggesting significant application value.

  7. A Method of Drusen Measurement Based on the Geometry of Fundus Reflectance

    Directory of Open Access Journals (Sweden)

    Barbazetto Irene

    2003-04-01

    Full Text Available Abstract Background The hallmarks of age-related macular degeneration, the leading cause of blindness in the developed world, are the subretinal deposits known as drusen. Drusen identification and measurement play a key role in clinical studies of this disease. Current manual methods of drusen measurement are laborious and subjective. Our purpose was to expedite clinical research with an accurate, reliable digital method. Methods An interactive semi-automated procedure was developed to level the macular background reflectance for the purpose of morphometric analysis of drusen. 12 color fundus photographs of patients with age-related macular degeneration and drusen were analyzed. After digitizing the photographs, the underlying background pattern in the green channel was leveled by an algorithm based on the elliptically concentric geometry of the reflectance in the normal macula: the gray scale values of all structures within defined elliptical boundaries were raised sequentially until a uniform background was obtained. Segmentation of drusen and area measurements in the central and middle subfields (1000 μm and 3000 μm diameters) were performed by uniform thresholds. Two observers using this interactive semi-automated software measured each image digitally. The mean digital measurements were compared to independent stereo fundus gradings by two expert graders (stereo Grader 1 estimated the drusen percentage in each of the 24 regions as falling into one of four standard broad ranges; stereo Grader 2 estimated drusen percentages in 1% to 5% intervals). Results The mean digital area measurements had a median standard deviation of 1.9%. The mean digital area measurements agreed with stereo Grader 1 in 22/24 cases. The 95% limits of agreement between the mean digital area measurements and the more precise stereo gradings of Grader 2 were -6.4 % to +6.8 % in the central subfield and -6.0 % to +4.5 % in the middle subfield. The mean absolute
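
    The two main steps can be sketched as follows; the ring geometry, target level and threshold offset are illustrative stand-ins for the interactive, elliptically concentric procedure described above.

```python
# Illustrative background levelling over concentric elliptical rings, followed
# by drusen segmentation with a single uniform threshold.
import numpy as np

def level_background(green, n_rings=8):
    h, w = green.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    r = np.sqrt(((y - cy) / (h / 2.0)) ** 2 + ((x - cx) / (w / 2.0)) ** 2)
    levelled = green.astype(float).copy()
    target = np.median(green)
    for i in range(n_rings):
        ring = (r >= i / n_rings) & (r < (i + 1) / n_rings)
        if ring.any():
            # Raise (or lower) the ring so its median matches the global target.
            levelled[ring] += target - np.median(levelled[ring])
    return levelled

green = np.random.rand(512, 512) * 255             # stand-in for the green channel
flat = level_background(green)
drusen_mask = flat > np.median(flat) + 20           # uniform threshold above background
area_percent = 100.0 * drusen_mask.mean()
print("drusen area in the analysed field: %.1f%%" % area_percent)
```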

  8. The Spectrum of Fundus Autofluorescence Findings in Birdshot Chorioretinopathy

    Directory of Open Access Journals (Sweden)

    GianPaolo Giuliari

    2009-01-01

    Full Text Available Objective. To describe the diverse patterns observed with the use of autofluorescence fundus photography (FAF) in patients with Birdshot chorioretinopathy (BSCR). Methods. A chart review of patients with BSCR seen at the Massachusetts Eye Research and Surgery Institution who had autofluorescence fundus photography. The data obtained included age, gender, presence of the HLA-A29 haplotype, and current treatment. Results. Eighteen eyes with HLA-A29-associated BSCR were included. Four eyes presented with active inflammation. In 3 eyes, the lesions noted on the colour fundus photograph had corresponding lesions that were more easily identified with FAF. Fifteen eyes had fundus lesions that were more numerous and evident on FAF than on the colour fundus photograph. Conclusion. Because FAF testing provides valuable insight into the metabolic state of the PR/RPE complex, it may serve as a useful noninvasive assessment tool in patients with posterior uveitis in which the outer retina-RPE-choriocapillaris complex is involved.

  9. Imaging Emission Spectra with Handheld and Cellphone Cameras

    Science.gov (United States)

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  10. The making of analog module for gamma camera interface

    International Nuclear Information System (INIS)

    Yulinarsari, Leli; Rl, Tjutju; Susila, Atang; Sukandar

    2003-01-01

    An analog module for a gamma camera has been made. For the computerization of a 37-PMT planar gamma camera, interface hardware and software between the planar gamma camera and a PC have been developed. With this interface, the gamma camera image information (originally an analog signal) is converted to a digital signal, so that data acquisition, image quality improvement, data analysis and database processing can be carried out with the help of computers. There are three main gamma camera signals, i.e. X, Y and Z. This analog module digitizes the analog X and Y signals from the gamma camera, which carry position information originating in the gamma camera crystal. Analog-to-digital conversion is performed by two 12-bit ADCs with a conversion time of 800 ns each; the conversion procedure for each X and Y coordinate is synchronized using the appropriate strobe signal Z for information acceptance

  11. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which avoids the distortion of locating signals generally caused by the varying light-conducting capacities of the light conductors: the flow of light through each light conductor can be adjusted by means of a shutter. The flow of light through the individual (or collective) light conductors can thus be balanced according to their light-conducting capacities or properties, so as to preclude a distortion of the locating signals caused by these variations. Each light conductor has associated with it two shutters that are adjustable independently of each other, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  12. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera produces images of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computing circuits that derive from the photomultiplier outputs an analytical function which depends on the position of the scintillation in the crystal at the time. The scintillation crystal is flat and spatially corresponds to the site where the radiation is produced. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, and a reference axis running perpendicular to each series group lies in the crystal plane. The computing circuits are each assigned to one reference axis. In the computing circuit, each series of a series group assigned to one of the reference axes has an adder to produce a scintillation-dependent series signal. Furthermore, the projection of the scintillation onto this reference axis is calculated. For this, a series signal is used which originates from a series chosen from two neighbouring photomultiplier series of this group; the scintillation must have appeared between these chosen series, which are termed the basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de

  13. An evaluation of fundus photography and fundus autofluorescence in the diagnosis of cuticular drusen.

    Science.gov (United States)

    Høeg, Tracy B; Moldow, Birgitte; Klein, Ronald; La Cour, Morten; Klemp, Kristian; Erngaard, Ditte; Ellervik, Christina; Buch, Helena

    2016-03-01

    To examine non-mydriatic fundus photography (FP) and fundus autofluorescence (FAF) as alternative non-invasive imaging modalities to fluorescein angiography (FA) in the detection of cuticular drusen (CD). Among 2953 adults from the Danish Rural Eye Study (DRES) with gradable FP, three study groups were selected: (1) All those with suspected CD without age-related macular degeneration (AMD) on FP, (2) all those with suspected CD with AMD on FP and (3) a randomly selected group with early AMD. Groups 1, 2 and 3 underwent FA and FAF and group 4 underwent FAF only as part of DRES CD substudy. Main outcome measures included percentage of correct positive and correct negative diagnoses, Cohen's κ and prevalence-adjusted and bias-adjusted κ (PABAK) coefficients of test and grader reliability. CD was correctly identified on FP 88.9% of the time and correctly identified as not being present 83.3% of the time. CD was correctly identified on FAF 62.0% of the time and correctly identified as not being present 100.0% of the time. Compared with FA, FP has a PABAK of 0.75 (0.60 to 1.5) and FAF a PABAK of 0.44 (0.23 to 0.95). FP is a promising, non-invasive substitute for FA in the diagnosis of CD. FAF was less reliable than FP to detect CD. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  14. Fundus autofluorescence in retinal artery occlusion: A more precise diagnosis.

    Science.gov (United States)

    Bacquet, J-L; Sarov-Rivière, M; Denier, C; Querques, G; Riou, B; Bonin, L; Barreau, E; Labetoulle, M; Rousseau, A

    2017-10-01

    Retinal artery occlusion (RAO) is a medical emergency associated with a high risk of cerebral vascular accident and other cardiovascular events. Among patients with non-arteritic RAO, a retinal embolus is observed in approximately 40% of cases. Fundus examination and retinography are not reliable to predict the nature of the emboli. We report three consecutive cases of central and branch RAO that were investigated with fundus autofluorescence, fluorescein angiography and color retinal photographs. All patients underwent complete neurological and cardiovascular workups, with brain imaging, cardiac Doppler ultrasound, carotid Dopplers and Holter ECGs, to determine the underlying mechanism of retinal embolism. In the three cases, aged 77.7±4 years (2 women and 1 man), fundus autofluorescence demonstrated hyperautofluorescent emboli. In two cases, it allowed visualization of emboli that were not detected with fundus examination or retinography. The cardiovascular work-up demonstrated atheromatous carotid or aortic plaques in all patients. In one case, it permitted the diagnosis of RAO. Two of the three cases were considered to be of atherosclerotic origin and one of undefined origin. Fundus autofluorescence may help to detect and characterize retinal emboli. Since lipofuscin, which is present in large quantity in atherosclerotic plaques, is the main fluorophore detected with fundus autofluorescence, this non-invasive and simple examination may give information about the underlying mechanism of retinal embolism, and thus impact the etiologic assessment of RAO. Additional studies are necessary to confirm this potential role of autofluorescence. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  15. Personal identification based on blood vessels of retinal fundus images

    Science.gov (United States)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented instead of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in the database. In the first step, registration between the input image and the reference image is performed. This step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and the reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used for the evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method has higher performance than other biometrics except for DNA. To be used for practical application in the public, a device that can take retinal fundus images easily is needed. The proposed method is applicable not only to PI but also to a system that warns about misfiling of fundus images in medical facilities.
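
    As a rough illustration of the matching step described above, the sketch below computes a normalized cross-correlation coefficient between two registered vessel images and applies an identification threshold. The vessel extraction, the registration step and the threshold value are assumptions made only for the example, not the authors' implementation.

        import numpy as np

        def cross_correlation(vessels_a, vessels_b):
            """Normalized cross-correlation coefficient of two same-sized vessel images."""
            a = vessels_a.astype(float).ravel()
            b = vessels_b.astype(float).ravel()
            a -= a.mean()
            b -= b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def identify(input_vessels, reference_vessels, threshold=0.6):
            """Accept the identity claim when similarity exceeds a predetermined threshold.
            The threshold here is an arbitrary placeholder, not the value used in the study."""
            return cross_correlation(input_vessels, reference_vessels) >= threshold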

  16. Krypton red laser photocoagulation of the ocular fundus. 1982.

    Science.gov (United States)

    Yannuzzi, Lawrence A; Shakin, Jeffrey L

    2012-02-01

    The theoretical rationale, the histopathologic evidence, and the preliminary clinical studies related to krypton red laser (KRL) photocoagulation of the ocular fundus are reviewed. The authors report on their experience with currently available laser systems using this wavelength (647.1 nm) for photocoagulation of retinal vascular proliferative diseases and chorioretinal diseases associated with exudative manifestations. A histopathologic and clinical comparison of argon blue-green laser (ABGL), the pure argon green laser (AGL), and the krypton yellow laser (KYL), with reference to photocoagulation treatment of the ocular fundus is also discussed.

  17. Fundus Findings in Dengue Fever: A Case Report

    Directory of Open Access Journals (Sweden)

    Berna Şahan

    2015-10-01

    Full Text Available Dengue fever is a flavivirus infection transmitted through infected mosquitoes, and is endemic in Southeast Asia, Central and South America, the Pacific, Africa and the Eastern Mediterranean region. A 41-year-old male patient had visual impairment after travelling to Thailand, which is one of the endemic areas. Cotton wool spots were observed on fundus examination. Fundus fluorescein angiography showed minimal vascular leakage from areas near the cotton wool spots and dot hemorrhages in the macula. Dengue fever should be considered in patients with visual complaints who traveled to endemic areas of dengue fever. (Turk J Ophthalmol 2015; 45: 223-225)

  18. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  19. Quantitative analysis of digital outcrop data obtained from stereo-imagery using an emulator for the PanCam camera system for the ExoMars 2020 rover

    Science.gov (United States)

    Barnes, Robert; Gupta, Sanjeev; Gunn, Matt; Paar, Gerhard; Balme, Matt; Huber, Ben; Bauer, Arnold; Furya, Komyo; Caballo-Perucha, Maria del Pilar; Traxler, Chris; Hesina, Gerd; Ortner, Thomas; Banham, Steven; Harris, Jennifer; Muller, Jan-Peter; Tao, Yu

    2017-04-01

    A key focus of planetary rover missions is to use panoramic camera systems to image outcrops along rover traverses, in order to characterise their geology in search of ancient life. These data can be processed to create 3D point clouds of rock outcrops to be quantitatively analysed. The Mars Utah Rover Field Investigation (MURFI 2016) is a Mars rover field analogue mission run by the UK Space Agency (UKSA) in collaboration with the Canadian Space Agency (CSA). It took place between 22nd October and 13th November 2016 and consisted of a science team based in Harwell, UK, and a field team including an instrumented rover platform at the field site near Hanksville (Utah, USA). The Aberystwyth University PanCam Emulator 3 (AUPE3) camera system was used to collect stereo panoramas of the terrain the rover encountered during the field trials. Stereo imagery processed in PRoViP is rendered as Ordered Point Clouds (OPCs) in PRo3D, enabling the user to zoom, rotate and translate the 3D outcrop model. Interpretations can be digitised directly onto the 3D surface, and simple measurements can be taken of the dimensions of the outcrop and sedimentary features, including grain size. Dip and strike of bedding planes, stratigraphic and sedimentological boundaries and fractures are calculated within PRo3D from mapped bedding contacts and fracture traces. Rover-derived imagery can be merged with UAV and orbital datasets to build semi-regional, multi-resolution 3D models of the area of operations for immersive analysis and contextual understanding. In simulation, AUPE3 was mounted onto the rover mast, collecting 16 stereo panoramas over 9 'sols'. Five out-of-simulation datasets were collected in the Hanksville-Burpee Quarry. Stereo panoramas were processed using an automated pipeline and data transfer through an ftp server. PRo3D has been used for visualisation and analysis of this stereo data. Features of interest in the area could be annotated, and their distances to the rover

  20. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family which is available with either 6000, 8000 or 10000 pixels/color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
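
    The 12-to-8-bit conversion through a user-defined look-up table can be illustrated with a short sketch; the power-law (gamma) response and table size below are generic assumptions, not the vendor's actual table format.

        import numpy as np

        def gamma_lut_12_to_8(gamma=2.2):
            """Build a 4096-entry table mapping 12-bit input codes to 8-bit output
            with a simple power-law (gamma) response."""
            codes = np.arange(4096, dtype=float) / 4095.0      # normalized 12-bit input
            return np.round(255.0 * codes ** (1.0 / gamma)).astype(np.uint8)

        # Applying the table to a 12-bit scan line is a single indexed lookup.
        lut = gamma_lut_12_to_8()
        line_12bit = np.random.randint(0, 4096, size=8000)     # one 8000-pixel line
        line_8bit = lut[line_12bit]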

  1. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), consisting of 10 distinct and standalone measurement channels each holding a camera, is being developed for the video diagnostics of the W-7X stellarator. Different operation modes will be implemented for continuous and for triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small and compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module, which handles all user-predefined events and runs image processing algorithms to generate trigger signals; and finally a 10 Gigabit Ethernet compatible image readout card, which functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described
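
    The quoted raw data volume can be checked with a few lines of arithmetic; the figure in the abstract is reproduced when the total is expressed in binary terabytes (TiB).

        # Raw data volume of one EDICAM channel at full resolution.
        bits_per_pixel = 12
        pixels = 1280 * 1024
        fps = 444
        seconds = 30 * 60                                  # half an hour

        bytes_per_frame = pixels * bits_per_pixel / 8      # ~1.97 MB per frame
        bytes_total = bytes_per_frame * fps * seconds      # ~1.57e12 bytes
        print(bytes_total / 2**40)                         # ~1.43 TiB, matching the abstract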

  2. Lights, camera, action research: The effects of didactic digital movie making on students' twenty-first century learning skills and science content in the middle school classroom

    Science.gov (United States)

    Ochsner, Karl

    Students are moving away from content consumption to content production. Short movies are uploaded onto video social networking sites and shared around the world. Unfortunately they usually contain little to no educational value, lack a narrative and are rarely created in the science classroom. According to new Arizona Technology standards and ISTE NET*S, along with the framework from the Partnership for 21st Century Learning Standards, our society demands students not only to learn curriculum, but to think critically, problem solve effectively, and become adept at communicating and collaborating. Didactic digital movie making in the science classroom may be one way that these twenty-first century learning skills may be implemented. An action research study using a mixed-methods approach to collect data was used to investigate if didactic moviemaking can help eighth grade students learn physical science content while incorporating 21st century learning skills of collaboration, communication, problem solving and critical thinking skills through their group production. Over a five week period, students researched lessons, wrote scripts, acted, video recorded and edited a didactic movie that contained a narrative plot to teach a science strand from the Arizona State Standards in physical science. A pretest/posttest science content test and KWL chart was given before and after the innovation to measure content learned by the students. Students then took a 21st Century Learning Skills Student Survey to measure how much they perceived that communication, collaboration, problem solving and critical thinking were taking place during the production. An open ended survey and a focus group of four students were used for qualitative analysis. Three science teachers used a project evaluation rubric to measure science content and production values from the movies. Triangulating the science content test, KWL chart, open ended questions and the project evaluation rubric, it

  3. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature controlled and vacuum tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components are discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.

  4. Ophthalmoscopy versus non-mydriatic fundus photography in the ...

    African Journals Online (AJOL)

    The contribution of non-mydriatic fundus photography in the detection of diabetic retinopathy before and after dilatation of the pupils in black diabetics was investigated and compared with direct ophthalmoscopy. Eighty-six patients were examined and good-quality photographs were obtained for 54.7% of eyes before and ...

  5. Fundus autofluorescence findings in a mouse model of retinal detachment.

    Science.gov (United States)

    Secondi, Roberta; Kong, Jian; Blonska, Anna M; Staurenghi, Giovanni; Sparrow, Janet R

    2012-08-07

    Fundus autofluorescence (fundus AF) changes were monitored in a mouse model of retinal detachment (RD). RD was induced by transscleral injection of hyaluronic acid (Healon) or sterile balanced salt solution (BSS) into the subretinal space of 4-5-day-old albino Abca4 null mutant and Abca4 wild-type mice. Images acquired by confocal scanning laser ophthalmoscopy (Spectralis HRA) were correlated with spectral domain optical coherence tomography (SD-OCT), infrared reflectance (IR), fluorescence spectroscopy, and histologic analysis. Results. In the area of detached retina, multiple hyperreflective spots in IR images corresponded to punctate areas of intense autofluorescence visible in fundus AF mode. The puncta exhibited changes in fluorescence intensity with time. SD-OCT disclosed undulations of the neural retina and hyperreflectivity of the photoreceptor layer that likely corresponded to histologically visible photoreceptor cell rosettes. Fluorescence emission spectra generated using flat-mounted retina, and 488 and 561 nm excitation, were similar to that of RPE lipofuscin. With increased excitation wavelength, the emission maximum shifted towards longer wavelengths, a characteristic typical of fundus autofluorescence. In detached retinas, hyper-autofluorescent spots appeared to originate from photoreceptor outer segments that were arranged within retinal folds and rosettes. Consistent with this interpretation is the finding that the autofluorescence was spectroscopically similar to the bisretinoids that constitute RPE lipofuscin. Under the conditions of a RD, abnormal autofluorescence may arise from excessive production of bisretinoid by impaired photoreceptor cells.

  6. Use of fundus autofluorescence images to predict geographic atrophy progression.

    Science.gov (United States)

    Bearelly, Srilaxmi; Khanifar, Aziz A; Lederer, David E; Lee, Jane J; Ghodasra, Jason H; Stinnett, Sandra S; Cousins, Scott W

    2011-01-01

    Fundus autofluorescence imaging has been shown to be helpful in predicting progression of geographic atrophy (GA) secondary to age-related macular degeneration. We assess the ability of fundus autofluorescence imaging to predict rate of GA progression using a simple categorical scheme. Subjects with GA secondary to age-related macular degeneration with fundus autofluorescence imaging acquired at least 12 months apart were included. Rim area focal hyperautofluorescence was defined as percentage of the 500-μm-wide margin bordering the GA that contained increased autofluorescence. Rim area focal hyperautofluorescence on baseline fundus autofluorescence images was assessed and categorized depending on the extent of rim area focal hyperautofluorescence (Category 1: ≤33%; Category 2: between 33 and 67%; Category 3: ≥67%). Total GA areas at baseline and follow-up were measured to calculate change in GA progression. Forty-five eyes of 45 subjects were included; average duration of follow-up was 18.5 months. Median growth rates differed among categories of baseline rim area focal hyperautofluorescence (P = 0.01 among Categories 1, 2, and 3; P = 0.008 for Category 1 compared with Category 3, Jonckheere-Terpstra test). A simple categorical scheme that stratifies the amount of increased autofluorescence in the 500-μm margin bordering GA may be used to differentiate faster and slower progressors.
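
    The categorical scheme is simple enough to state directly in code; the boundaries below are taken from the abstract, and the function itself is only an illustration.

        def rim_haf_category(rim_hyperautofluorescence_percent):
            """Map the percentage of the 500-um GA margin showing increased
            autofluorescence to the three categories used in the study."""
            if rim_hyperautofluorescence_percent <= 33:
                return 1
            if rim_hyperautofluorescence_percent < 67:
                return 2
            return 3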

  7. Fundus autofluorescence features of optic disc pit related maculopathy

    African Journals Online (AJOL)

    Fundus autofluorescence (FAF) is a new investigational tool used to identify lipofuscin distribution in the retinal pigment epithelium (RPE) cell monolayer. It has recently been used to analyze age-related macular degeneration, central serous chorioretinopathy, retinal telangiectasia, and diffuse and macular retinal dystrophies.

  8. Using camera traps and digital video to investigate the impact of Aethina tumida pest on honey bee (Apis mellifera adansonii) reproduction and ability to keep away elephants (Loxodonta africana cyclotis) in Gamba, Gabon

    Directory of Open Access Journals (Sweden)

    Steeve Ngama

    2018-06-01

    Full Text Available Bee and elephant interactions are at the core of a conservation curiosity, since it has been demonstrated that bees, one of the smallest domesticated animals, can keep away elephants, the largest terrestrial animals. Yet insect parasites can impact the fitness and activity of the bees. Since their activity is critical to their repellent ability against elephants, this study assessed the impact of small hive beetles (Aethina tumida) on bee (Apis mellifera adansonii) reproduction and ability to keep forest elephants (Loxodonta africana cyclotis) away. Because interspecies interactions are not easy to investigate, we used camera traps and digital video to observe the activity of bees and their interactions with wild forest elephants under varying conditions of hive infestation with the small hive beetle, a common bee pest. Our results show that queen cells are good visual indicators of colony efficiency in keeping away forest elephants. We provide evidence that small hive beetles are equally present in large and small bee colonies. Yet the results raise no concerns about the use of bees as elephant deterrents because of parasitism due to small hive beetles: Apis mellifera adansonii bees seem to cope effectively with small hive beetles, which showed no significant influence on their reproduction or ability to keep elephants away. This study also reports for the first time the presence of Aethina tumida as a constant beekeeping pest that needs to be addressed in Gabon.

  9. Imagers for digital still photography

    Science.gov (United States)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  10. Diagnostic accuracy of direct ophthalmoscopy for detection of diabetic retinopathy using fundus photographs as a reference standard.

    Science.gov (United States)

    Ahsan, Shahid; Basit, Abdul; Ahmed, Kazi Rumana; Ali, Liaquat; Shaheen, Fariha; Ulhaque, Muhammad Saif; Fawwad, Asher

    2014-01-01

    To determine the diagnostic accuracy of direct ophthalmoscopy for the presence and severity of diabetic retinopathy (DR) using fundus photographs as a reference standard. Patients with type 2 diabetes attending the outpatient department (OPD) of a tertiary care diabetes center from October 2009 to March 2010 were recruited into the study after obtaining signed informed consent. Patients with type 1 diabetes or gestational diabetes, or those having eye problems, were excluded. After checking visual acuity, direct ophthalmoscopy of each eye was performed by a diabetologist, followed by photography of two fields of the retina with a fundus camera. DR was graded by a retinal specialist according to the International Diabetic Retinopathy Disease Severity Scale. According to severity, patients with DR were grouped into non-sight-threatening diabetic retinopathy (NSTDR) and sight-threatening diabetic retinopathy (STDR). Sensitivity and specificity of direct ophthalmoscopy for the detection of any retinopathy, NSTDR and STDR were calculated. A total of 728 eyes were examined by direct ophthalmoscopy as well as fundus photography. Sensitivity (95% CI) of direct ophthalmoscopy for any retinopathy, NSTDR and STDR was found to be 55.67% (50.58-60.78), 37.63% (32.67-42.59) and 68.25% (63.48-73.02), respectively, whereas specificity was found to be 76.78% (72.45-81.11), 71.27% (CI: 66.63-75.91) and 90.0% (86.93-93.07) for any retinopathy, NSTDR and STDR, respectively. The sensitivity and specificity of direct ophthalmoscopy performed by the diabetologist for the presence and severity of DR were lower than the recommended levels of sensitivity and specificity for a DR screening test. Copyright © 2014 Diabetes India. Published by Elsevier Ltd. All rights reserved.
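
    For readers unfamiliar with how such figures are derived, the sketch below computes sensitivity, specificity and normal-approximation 95% confidence intervals from a 2x2 confusion matrix. The counts shown are placeholders for illustration, not the study data.

        import math

        def rate_with_ci(successes, total):
            """Proportion with a normal-approximation 95% confidence interval."""
            p = successes / total
            half = 1.96 * math.sqrt(p * (1 - p) / total)
            return p, (p - half, p + half)

        # Placeholder counts: tp/fn among eyes with retinopathy on photographs (the
        # reference standard), tn/fp among eyes without retinopathy on photographs.
        tp, fn, tn, fp = 120, 96, 394, 118

        sensitivity, sens_ci = rate_with_ci(tp, tp + fn)
        specificity, spec_ci = rate_with_ci(tn, tn + fp)
        print(sensitivity, sens_ci, specificity, spec_ci)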

  11. Fundus autofluorescence in blunt ocular trauma

    Directory of Open Access Journals (Sweden)

    Ricardo Luz Leitão Guerra

    2014-06-01

    Full Text Available Objective: To describe the fundus autofluorescence (FAF) findings in patients with blunt ocular trauma. Methods: Retrospective, non-interventional study based on a review of medical records and imaging examinations. The data analyzed were: sex, age, laterality, etiology of the trauma, time elapsed between the trauma and the examination, visual acuity, changes in the retinal periphery, fundoscopic diagnosis and FAF findings (obtained with the Topcon TRC-50DX Retinal Camera). Results: Eight eyes of 8 patients were studied. The mean age was 27.6 years (range, 19 to 43 years); males (n=7) were more frequently affected than females (n=1); physical assault was the most common etiology of trauma (n=3), followed by fireworks accidents (n=2). Other causes were a motor vehicle accident (n=1), occupational trauma with a sander (n=1) and being struck by a stone (n=1). Visual acuity ranged from 20/80 to light perception. Traumatic pigment epitheliopathy (TPE) was identified in 5 cases, choroidal rupture in 3, subretinal hemorrhage in 3 and Purtscher retinopathy in 1 case. Hypoautofluorescence was observed in the cases of choroidal rupture, recent subretinal hemorrhage, intraretinal hemorrhage and in 2 cases of TPE. Hyperautofluorescence was seen in the case of resolving subretinal hemorrhage, at the edges of 2 choroidal ruptures and faintly at the posterior pole in Purtscher retinopathy. Three cases of TPE showed hypoautofluorescence with diffuse hyperautofluorescent dots. Conclusion: FAF allows non-invasive assessment of the changes in the posterior segment of the eye resulting from blunt ocular trauma, adding valuable information. FAF findings were described in cases of traumatic pigment epitheliopathy, choroidal rupture, subretinal hemorrhage and Purtscher retinopathy.

  12. PERFORMANCE EVALUATION OF THERMOGRAPHIC CAMERAS FOR PHOTOGRAMMETRIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2013-05-01

    Full Text Available The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. Digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviation of the measured image coordinates was 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of the measured image points in the Flir A320 thermographic camera images thus reached almost the same accuracy level as the digital camera, despite a pixel size about 4 times larger. From the results obtained in this research, the interior geometry of the thermographic cameras and lens distortion was

  13. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

    The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. Digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviation of the measured image coordinates was 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of the measured image points in the Flir A320 thermographic camera images thus reached almost the same accuracy level as the digital camera, despite a pixel size about 4 times larger. From the results obtained in this research, the interior geometry of the thermographic cameras and the lens distortion were modelled efficiently
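
    The radial and tangential distortion terms mentioned above are commonly modelled with a Brown-Conrady polynomial; the sketch below shows that standard form with two radial and two tangential coefficients. It is illustrative only and not the specific additional-parameter set used in the adjustment software.

        def distort(x, y, k1, k2, p1, p2):
            """Apply radial (k1, k2) and tangential (p1, p2) distortion to normalized
            image coordinates (x, y) measured from the principal point."""
            r2 = x * x + y * y
            radial = 1.0 + k1 * r2 + k2 * r2 * r2
            x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
            y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
            return x_d, y_d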

  14. Clinical Investigation of Radiation Retinopathy Fundus and Fluorescein Angiographic Features

    Institute of Scientific and Technical Information of China (English)

    LiMei; QiuGT

    1999-01-01

    Purpose: To investigate the fundus and fluorescein angiographic features in patients with radiation retinopathy. Clinical Materials: Color fundus photography and/or fluorescein angiography from 13 patients with nasopharyngeal carcinomas who received external beam radiation were retrospectively analyzed. Results: In this study, 26 damaged eyes of 13 patients developed some degree of radiation retinopathy. The earliest and most common finding was macular microvascular changes (microaneurysms and/or telangiectasia), which were observed in 100% (26/26) of the eyes. Intraretinal hemorrhages, macular capillary nonperfusion, and macular edema were noted in 84%, 50%, and 42% of the eyes, respectively. Conclusions: Radiation retinopathy is common after external beam radiation of nasopharyngeal carcinomas. The prominent changes include macular microvascular changes, intraretinal hemorrhages and macular capillary nonperfusion.

  15. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed with the program system BLUH. Dense matching was provided by Pix4Dmapper, with 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centre are satisfactory but not very strongly connected. This leads to very high values of the Student (t) test for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radially symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radially symmetric distortions also for the inclined cameras, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  16. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  17. Interactive segmentation for geographic atrophy in retinal fundus images

    OpenAIRE

    Lee, Noah; Smith, R. Theodore; Laine, Andrew F.

    2008-01-01

    Fundus auto-fluorescence (FAF) imaging is a non-invasive technique for in vivo ophthalmoscopic inspection of age-related macular degeneration (AMD), the most common cause of blindness in developed countries. Geographic atrophy (GA) is an advanced form of AMD and accounts for 12–21% of severe visual loss in this disorder [3]. Automatic quantification of GA is important for determining disease progression and facilitating clinical diagnosis of AMD. The problem of automatic segmentation of patho...

  18. Case Report of Bullous Pemphigoid following Fundus Fluorescein Angiography

    Directory of Open Access Journals (Sweden)

    Goktug Demirci

    2010-05-01

    Full Text Available Purpose: To report a first case of bullous pemphigoid (BP) following intravenous fluorescein for fundus angiography. Clinical Features: A 70-year-old male patient was admitted to the intensive care unit with BP and sepsis. He reported a history of fundus fluorescein angiography with a pre-diagnosis of senile macular degeneration 2 months prior to presentation. At that time, fluorescein extravasated at the antecubital region. Following the procedure, pruritus and erythema began at the wrists bilaterally, and quickly spread to the entire body. The patient also reported a history of allergy to human albumin solution (Plamasteril®; Abbott) 15 years before, during bypass surgery. On dermatologic examination, erythematous patches were present on the scalp, chest and anogenital region. Vesicles and bullous lesions were present on upper and lower extremities. On day 2 of hospitalization, tense bullae appeared on the upper and lower extremities. The patient was treated with oral methylprednisolone 48 mg (Prednol®; Mustafa Nevzat), topical clobetasol dipropionate 0.05% cream (Dermovate®; Glaxo SmithKline), and topical 4% urea lotion (Excipial Lipo®; Orva) for presumptive bullous pemphigoid. Skin punch biopsy provided tissue for histopathology, direct immunofluorescence examination, and salt extraction, which were all consistent with BP. After 1 month, the patient was transferred to the intensive care unit with sepsis secondary to urinary tract infection; he died 2 weeks later from sepsis and cardiac failure. Conclusions: To our knowledge, this is the first reported case of BP following fundus fluorescein angiography in a patient with known human albumin solution allergy. Consideration should be made to avoid fluorescein angiography, change administration route, or premedicate with antihistamines in patients with known human albumin solution allergy. The association between fundus fluorescein angiography and BP should be further investigated.

  19. Analysis of phakic before intraocular lens implantation for fundus examination

    Directory of Open Access Journals (Sweden)

    Juan Chen

    2014-10-01

    Full Text Available AIM: To investigate the findings in eyes examined with a Goldmann three-mirror contact lens before implantable collamer lens (ICL) implantation, to analyze the retinal pathological changes, and to explore the clinical value of early diagnosis and treatment of retinopathy based on preoperative fundus examination. METHODS: This retrospective case series included 127 eyes of 64 patients who underwent phakic intraocular lens implantation and received three-mirror fundus examination from April 2011 to April 2012 in our hospital. Age, refractive error, the findings of the Goldmann three-mirror examination and the condition of retinal photocoagulation were analyzed and summarized. RESULTS: A total of 34 eyes (26.8%) out of the 127 eyes (64 cases) were found to have peripheral retinal pathological changes: 8 eyes (6.3%) with retinal holes, 15 eyes (11.8%) with retinal lattice degeneration, 5 eyes (3.9%) with retinal cream degeneration, 3 eyes (2.4%) with retinal paving-stone degeneration, 2 eyes with vitreoretinal adhesion and traction, and 1 eye (0.8%) with retinal hemorrhage. Twenty-five cases were given retinal photocoagulation and received ICL implantation 3 months later. The follow-up time was 1 year. No retinal detachment occurred. CONCLUSION: Fundus examination with the three-mirror lens before phakic intraocular lens implantation helps to detect peripheral retinal pathological changes and abnormalities, allowing appropriate treatment before operation to improve the safety of surgery; it is also helpful for the postoperative follow-up of the fundus in these patients.

  20. Feasibility and quality of nonmydriatic fundus photography in children

    Science.gov (United States)

    Toffoli, Daniela; Bruce, Beau B.; Lamirel, Cédric; Henderson, Amanda D.; Newman, Nancy J.; Biousse, Valérie

    2011-01-01

    Purpose Ocular funduscopic examination is difficult in young children and is rarely attempted by nonophthalmologists. Our objective was to determine the feasibility of reliably obtaining high-quality nonmydriatic fundus photographs in children. Methods Nonmydriatic fundus photographs were obtained in both eyes of children seen in a pediatric ophthalmology clinic. Ease of fundus photography was recorded on a 10-point Likert scale (10 = very easy). Quality was graded from 1 to 5 (1, inadequate for any diagnostic purpose; 2, unable to exclude all emergent findings; 3, only able to exclude emergent findings; 4, not ideal, but still able to exclude subtle findings; and 5, ideal quality). The primary outcome measure was image quality by age. Results A total of 878 photographs of 212 children (median age, 6 years; range, 1-18 years) were included. Photographs of at least one eye were obtained in 190 children (89.6%) and in both eyes in 181 (85.3%). The median rating for ease of photography was 7. Photographs of some clinical value (grade ≥2) were obtained in 33% of children <3 years. High-quality photographs (grade 4 or 5) were obtained in both eyes in 7% of children <3 years, 57% of children ≥3 to <7 years, 85% of children ≥7 to <9 years, and 65% of children ≥9 years. The youngest patient with high-quality photographs in both eyes was 22 months. Conclusions Nonmydriatic fundus photographs of adequate quality can be obtained in children over age 3 and in some children as young as 22 months. PMID:22153402

  1. The effects of fundus photography on the multifocal electroretinogram.

    Science.gov (United States)

    Suresh, Sandip; Tienor, Brian J; Smith, Scott D; Lee, Michael S

    2016-02-01

    To determine the effect of flash fundus photography (FFP) on the multifocal electroretinogram (mfERG). Ten subjects underwent mfERG testing on three separate dates. Subjects received either mfERG without FFP, mfERG at 5 and 15 min after FFP, or mfERG at 30 and 45 min after FFP on each date. The FFP groups received 10 fundus photographs followed by mfERG testing, first of the right eye then of the left eye 10 min later. Data were averaged and analyzed in six concentric rings at each time point. Average amplitude and implicit times of the N1, P1, and N2 peaks for each concentric ring at each time point after FFP were compared to baseline. Flash fundus photography did not lead to a significant change of amplitude or implicit times of N1, P1, or N2 at 5 min after light exposure. These findings suggest that it is acceptable to perform mfERG testing without delay after performance of FFP.

  2. Fundus autofluorescence imaging of the white dot syndromes.

    Science.gov (United States)

    Yeh, Steven; Forooghian, Farzin; Wong, Wai T; Faia, Lisa J; Cukras, Catherine; Lew, Julie C; Wroblewski, Keith; Weichel, Eric D; Meyerle, Catherine B; Sen, Hatice Nida; Chew, Emily Y; Nussenblatt, Robert B

    2010-01-01

    To characterize the fundus autofluorescence (FAF) findings in patients with white dot syndromes (WDSs). Patients with WDSs underwent ophthalmic examination, fundus photography, fluorescein angiography, and FAF imaging. Patients were categorized as having no, minimal, or predominant foveal hypoautofluorescence. The severity of visual impairment was then correlated with the degree of foveal hypoautofluorescence. Fifty-five eyes of 28 patients with WDSs were evaluated. Visual acuities ranged from 20/12.5 to hand motions. Diagnoses included serpiginous choroidopathy (5 patients), birdshot retinochoroidopathy (10), multifocal choroiditis (8), relentless placoid chorioretinitis (1), presumed tuberculosis-associated serpiginouslike choroidopathy (1), acute posterior multifocal placoid pigment epitheliopathy (1), and acute zonal occult outer retinopathy (2). In active serpiginous choroidopathy, notable hyperautofluorescence in active disease distinguished it from the variegated FAF features of tuberculosis-associated serpiginouslike choroidopathy. The percentage of patients with visual acuity impairment of less than 20/40 differed among eyes with no, minimal, and predominant foveal hypoautofluorescence (P < .001). Patients with predominant foveal hypoautofluorescence demonstrated worse visual acuity than those with minimal or no foveal hypoautofluorescence (both P < .001). Fundus autofluorescence imaging is useful in the evaluation of the WDS. Visual acuity impairment is correlated with foveal hypoautofluorescence. Further studies are needed to evaluate the precise role of FAF imaging in the WDSs.

  3. Application of 3-dimensional printing technology to construct an eye model for fundus viewing study.

    Directory of Open Access Journals (Sweden)

    Ping Xie

    Full Text Available To construct a life-sized eye model using three-dimensional (3D) printing technology for fundus viewing study of the viewing system. We devised our schematic model eye based on Navarro's eye and redesigned some parameters because of the change of the corneal material and the implantation of intraocular lenses (IOLs). The optical performance of our schematic model eye was compared with Navarro's schematic eye and two other reported physical model eyes using the ZEMAX optical design software. With computer-aided design (CAD) software, we designed the 3D digital model of the main structure of the physical model eye, which was used for three-dimensional (3D) printing. Together with the main printed structure, a polymethyl methacrylate (PMMA) aspherical cornea, a variable iris, and IOLs were assembled into a physical eye model. Angle scale bars were glued from the posterior pole to the periphery of the retina. Then we fabricated three other physical models with different states of ametropia. Optical parameters of these physical eye models were measured to verify the 3D printing accuracy. In on-axis calculations, our schematic model eye possessed a similar spot diagram size compared with Navarro's and Bakaraju's model eyes, much smaller than Arianpour's model eye. Moreover, the spherical aberration of our schematic eye was much less than that of the other three model eyes. In off-axis simulation, it possessed slightly higher coma and similar astigmatism, field curvature and distortion. The MTF curves showed that all the model eyes diminished in resolution with increasing field of view, and the trend of diminishing resolution of our physical eye model was similar to that of Navarro's eye. The measured parameters of our eye models with different states of ametropia were in line with the theoretical values. The schematic eye model we designed can well simulate the optical performance of the human eye, and the fabricated physical one can be used as a tool in fundus range viewing research.

  4. Application of 3-dimensional printing technology to construct an eye model for fundus viewing study.

    Science.gov (United States)

    Xie, Ping; Hu, Zizhong; Zhang, Xiaojun; Li, Xinhua; Gao, Zhishan; Yuan, Dongqing; Liu, Qinghuai

    2014-01-01

    To construct a life-sized eye model using the three-dimensional (3D) printing technology for fundus viewing study of the viewing system. We devised our schematic model eye based on Navarro's eye and redesigned some parameters because of the change of the corneal material and the implantation of intraocular lenses (IOLs). Optical performance of our schematic model eye was compared with Navarro's schematic eye and other two reported physical model eyes using the ZEMAX optical design software. With computer aided design (CAD) software, we designed the 3D digital model of the main structure of the physical model eye, which was used for three-dimensional (3D) printing. Together with the main printed structure, polymethyl methacrylate(PMMA) aspherical cornea, variable iris, and IOLs were assembled to a physical eye model. Angle scale bars were glued from posterior to periphery of the retina. Then we fabricated other three physical models with different states of ammetropia. Optical parameters of these physical eye models were measured to verify the 3D printing accuracy. In on-axis calculations, our schematic model eye possessed similar size of spot diagram compared with Navarro's and Bakaraju's model eye, much smaller than Arianpour's model eye. Moreover, the spherical aberration of our schematic eye was much less than other three model eyes. While in off- axis simulation, it possessed a bit higher coma and similar astigmatism, field curvature and distortion. The MTF curves showed that all the model eyes diminished in resolution with increasing field of view, and the diminished tendency of resolution of our physical eye model was similar to the Navarro's eye. The measured parameters of our eye models with different status of ametropia were in line with the theoretical value. The schematic eye model we designed can well simulate the optical performance of the human eye, and the fabricated physical one can be used as a tool in fundus range viewing research.

  5. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  6. Canine and feline fundus photography and videography using a nonpatented 3D printed lens adapter for a smartphone.

    Science.gov (United States)

    Espinheira Gomes, Filipe; Ledbetter, Eric

    2018-05-11

    To describe an indirect funduscopy imaging technique for dogs and cats using low-cost and widely available equipment: a smartphone, a three-dimensional (3D) printed indirect lens adapter, and a 40 diopter (D) indirect ophthalmoscopy lens. Fundus videography was performed in dogs and cats using a 40D indirect ophthalmoscopy lens and a smartphone fitted with a 3D printed indirect lens adapter. All animals were pharmacologically dilated with topical tropicamide 1% solution. Eyelid opening and video recording were performed using standard binocular indirect ophthalmoscopy technique. All videos were uploaded to a computer, and still images were selected and acquired for archiving purposes. Fundic images were manipulated to represent the true anatomy of the fundus. It was possible to promptly obtain good quality images from normal and diseased retinas using the nonpatented 3D printed lens adapter for a smartphone. Fundic imaging using a smartphone can be performed with minimal investment. This simple imaging modality can be used by veterinary ophthalmologists and general practitioners to acquire, archive, and share images of the retina. The quality of images obtained will likely improve with developments in smartphone camera software and hardware.

  7. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, different companies worldwide produce medical equipment for partial or total modernization of gamma cameras. The present work has demonstrated the possibility of substituting almost the entire signal-processing electronics inside a gamma camera detector head with a digitizer PCI card. This card includes four 12-bit analog-to-digital converters with a 50 MHz sampling rate. It was installed in a PC and controlled through software developed in LabVIEW. In addition, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). A new electronic design was also added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for the individual photomultiplier tubes. The images obtained by measurement of a 99mTc point radioactive source using the modernized camera head demonstrate its overall performance. The system was developed and tested on an old ORBITER II SIEMENS GAMMASONIC gamma camera at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project, supported by the National Program PNOULU and the IAEA. (Author)

  8. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction completely depend on the camera, since the camera defines the player’s point of view. Most research work in automatic camera control aims to take the control of this aspect from the player in order to automatically generate...

  9. HST Solar Arrays photographed by Electronic Still Camera

    Science.gov (United States)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  10. Effect of isoproterenol, phenylephrine, and sodium nitroprusside on fundus pulsations in healthy volunteers.

    Science.gov (United States)

    Schmetterer, L; Wolzt, M; Salomon, A; Rheinberger, A; Unfried, C; Zanaschka, G; Fercher, A F

    1996-03-01

    Recently a laser interferometric method for topical measurement of fundus pulsations has been developed. Fundus pulsations in the macular region are caused by the inflow and outflow of blood into the choroid. The purpose of this work was to study the influence of a peripheral vasoconstricting (the alpha 1 adrenoceptor agonist phenylephrine), a predominantly positive inotropic (the non-specific beta adrenoceptor agonist isoproterenol), and a non-specific vasodilating (sodium nitroprusside) model drug on ocular fundus pulsations, in order to determine the reproducibility and sensitivity of the method. In a double masked randomised crossover study the drugs were administered in stepwise increasing doses to 10 male and nine female healthy volunteers. Systemic haemodynamic variables and fundus pulsations were measured at all infusion steps. Fundus pulsations increased during infusion of isoproterenol, with statistical significance versus baseline at the lowest dose of 0.1 microgram/min. Neither peripheral vasoconstriction nor peripheral vasodilatation affected the ocular fundus pulsations. Measurement of fundus pulsations is a highly reproducible method in healthy subjects with low ametropia. Changes of local pulsatile ocular blood flow were detectable with our method following the infusion of isoproterenol. As systemic pharmacological vasodilatation or vasoconstriction did not change fundus pulsations, further experimental work has to be done to evaluate the sensitivity of laser interferometric fundus pulsation measurement in various eye diseases.

  11. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    Science.gov (United States)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, which are commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the beam coming through the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used in a close-range photogrammetric application on a rock surface at Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from the photographs taken with both cameras was compared using the differences between field and model coordinates obtained after alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they are almost overlapping. The mirrored camera was also more internally consistent with respect to the change of model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.
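
    For readers who want to reproduce the kind of accuracy comparison described above, the following minimal sketch (Python, with placeholder coordinates rather than the study's data) summarises the differences between surveyed field coordinates and photogrammetric model coordinates of check points as a per-camera RMSE.

```python
# Minimal sketch of the field-vs-model coordinate comparison; all coordinates
# below are placeholders, not values from the study.
import numpy as np

def rmse(field_xyz, model_xyz):
    d = np.asarray(field_xyz, dtype=float) - np.asarray(model_xyz, dtype=float)
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

field            = [[0.000, 0.000, 0.000], [1.250, 0.480, 0.020], [2.510, 1.010, -0.030]]
model_dslr       = [[0.002, -0.001, 0.001], [1.248, 0.483, 0.018], [2.514, 1.008, -0.027]]
model_mirrorless = [[0.004, 0.002, -0.002], [1.245, 0.485, 0.024], [2.517, 1.006, -0.035]]

print("DSLR (Nikon D700) RMSE [m]:      ", rmse(field, model_dslr))
print("Mirrorless (Sony a6000) RMSE [m]:", rmse(field, model_mirrorless))
```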

  12. Qualification Tests of Micro-camera Modules for Space Applications

    Science.gov (United States)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  13. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image
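
    The control principle in this abstract can be summarised in a few lines of code. The sketch below is not taken from the patent; the frame source, region of interest and threshold are hypothetical. It accumulates counts inside a predetermined area and terminates the exposure once a preset radiation density is reached, returning the accumulated quantity that plays the role of the index described above.

```python
# Hypothetical sketch of the exposure-control logic: integrate counts in a
# predetermined region of interest and stop when a preset density is reached.
import numpy as np

def run_exposure(frame_source, roi, density_threshold):
    """Return (frames integrated, accumulated quantity) once the accumulated
    counts inside `roi` reach `density_threshold`."""
    accumulated = 0.0
    n_frames = 0
    for frame in frame_source:
        n_frames += 1
        accumulated += frame[roi].sum()
        if accumulated >= density_threshold:
            break
    return n_frames, accumulated

# Synthetic frames stand in for the radiation camera output.
rng = np.random.default_rng(0)
frames = (rng.poisson(5.0, size=(64, 64)).astype(float) for _ in range(1000))
roi = (slice(24, 40), slice(24, 40))              # the predetermined area
n, index = run_exposure(frames, roi, density_threshold=5e4)
print(f"exposure terminated after {n} frames, index = {index:.0f}")
```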

  14. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
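
    As a rough illustration of the kind of analysis described, the sketch below builds a synthetic inter-camera attitude series containing a twice-per-rev error and locates its period from the auto-covariance function. The sample rate, orbit period and noise levels are assumptions for the demonstration, not GRACE Level-1B values.

```python
# Illustrative only: recover a twice-per-rev signature from the auto-covariance
# of a synthetic inter-camera attitude series (stand-in for GRACE quaternion data).
import numpy as np

def autocovariance(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.array([np.mean(x[:x.size - k] * x[k:]) for k in range(max_lag)])

fs = 0.2                           # assumed sample rate [Hz]
t = np.arange(0, 86400, 1 / fs)    # one day of samples
orbit_period = 5670.0              # approximate orbital period [s]
rng = np.random.default_rng(1)

# inter-camera angle [arcsec]: bias + twice-per-rev error + white noise
angle = 30.0 + 3.0 * np.sin(2 * np.pi * 2 * t / orbit_period) \
        + 1.0 * rng.standard_normal(t.size)

max_lag = int(0.5 * orbit_period * fs)           # look within half an orbit
acov = autocovariance(angle, max_lag)
lag_of_trough = np.argmin(acov)                  # first trough = half the error period
error_period = 2 * lag_of_trough / fs
print("periodic error with period ~", error_period, "s (about orbit period / 2)")
```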

  15. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  16. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    Directory of Open Access Journals (Sweden)

    Nogol Memari

    Full Text Available The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using contrast limited adaptive histogram equalization (CLAHE method and the inhomogeneity is corrected using Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove the misclassified pixels and regions. The proposed method was validated using publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE, Structured Analysis of the Retina (STARE and Child Heart and Health Study in England (CHASE_DB1 datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state of the art methods while being very close to the manual segmentation provided by the second human observer with an average accuracy of 0.972, 0.951 and 0.948 in DRIVE, STARE and CHASE_DB1 datasets, respectively.

  17. Is screening with digital imaging using one retinal view adequate?

    Science.gov (United States)

    Herbert, H M; Jordan, K; Flanagan, D W

    2003-05-01

    To compare the detection of diabetic retinopathy from digital images with slit-lamp biomicroscopy, and to determine whether British Diabetic Association (BDA) screening criteria are attained (>80% sensitivity, >95% specificity, and a low technical failure rate). A single 45-degree fundus image was obtained using the nonmydriatic digital camera. Each patient subsequently underwent slit-lamp biomicroscopy and diabetic retinopathy grading by a consultant ophthalmologist. Diabetic retinopathy and maculopathy were graded according to the Early Treatment of Diabetic Retinopathy Study. A total of 145 patients (288 eyes) were identified for screening. Of these, 26% of eyes had diabetic retinopathy, and eight eyes (3%) had sight-threatening diabetic retinopathy requiring treatment. The sensitivity for detection of any diabetic retinopathy was 38% and the specificity 95%. There was a 4% technical failure rate. There were 42/288 false negatives and 10/288 false positives. Of the 42 false negatives, 18 represented diabetic maculopathy, 20 represented peripheral diabetic retinopathy and four eyes had both macular and peripheral changes. Three eyes in the false-negative group (1% of total eyes) had sight-threatening retinopathy. There was good concordance between the two consultants (79% agreement on slit-lamp biomicroscopy and 84% on digital image interpretation). The specificity value and technical failure rate compare favourably with BDA guidelines. The low sensitivity for detection of any retinopathy reflects failure to detect minimal maculopathy and retinopathy outside the 45-degree image. This could be improved by an additional nasal image and careful evaluation of macular images with a low threshold for slit-lamp biomicroscopy if image quality is poor.
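
    The headline screening figures follow directly from the true/false positive and negative counts. In the toy calculation below, the true-positive and true-negative counts are approximate reconstructions from the reported 42 false negatives, 10 false positives, 38% sensitivity and 95% specificity; it is shown only to make the arithmetic explicit.

```python
# Back-of-the-envelope check of the quoted screening metrics (counts are
# approximate reconstructions, not figures taken directly from the paper).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn = 26, 42     # eyes with retinopathy detected / missed on the single image
tn, fp = 198, 10    # eyes without retinopathy correctly / wrongly flagged
print(f"sensitivity ~ {sensitivity(tp, fn):.0%}")   # ~38%
print(f"specificity ~ {specificity(tn, fp):.0%}")   # ~95%
```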

  18. Nonmydriatic Ocular Fundus Photography in the Emergency Department: How It Can Benefit Neurologists.

    Science.gov (United States)

    Bruce, Beau B

    2015-10-01

    Examination of the ocular fundus is a critical aspect of the neurologic examination. For example, in patients with headache the ocular fundus examination is needed to uncover "red flags" suggestive of secondary etiologies. However, ocular fundus examination is infrequently and poorly performed in clinical practice. Nonmydriatic ocular fundus photography provides an alternative to direct ophthalmoscopy that has been studied as part of the Fundus Photography versus Ophthalmoscopy Trial Outcomes in the Emergency Department (FOTO-ED) Study. Herein, the results of the FOTO-ED study are reviewed with a particular focus on the study's implications for the acute care of patients presenting with headache and focal neurologic deficits. In headache patients, not only optic disc edema and optic disc pallor were observed as would be expected, but also a large number of abnormalities associated with hypertension. Based upon subjects with focal neurologic deficits, the FOTO-ED study suggests that the ocular fundus examination may assist with the triage of patients presenting with suspected transient ischemic attack. Continued advances in the ease and portability of nonmydriatic fundus photography will hopefully help to restore ocular fundus examination as a routinely performed component of all neurologic examinations.

  19. Non-mydriatic ocular fundus photography in the emergency department: how it can benefit neurologists

    Science.gov (United States)

    Bruce, Beau B.

    2016-01-01

    Examination of the ocular fundus is a critical aspect of the neurological examination. For example, in patients with headache the ocular fundus examination is needed to uncover “red flags” suggestive of secondary etiologies. However, ocular fundus examination is infrequently and poorly performed in clinical practice. Non-mydriatic ocular fundus photography provides an alternative to direct ophthalmoscopy that has been studied as part of the Fundus photography vs. Ophthalmoscopy Trial Outcomes in the Emergency Department (FOTO-ED) study. Herein, we review the results of the FOTO-ED study with a particular focus on the study's implications for the acute care of patients presenting with headache and focal neurologic deficits. In headache patients, we not only observed optic disc edema and optic disc pallor as would be expected, but also a large number of abnormalities associated with hypertension. Based upon subjects with focal neurological deficits, the FOTO-ED study suggests that the ocular fundus examination may assist with the triage of patients presenting with suspected transient ischemic attack. Continued advances in the ease and portability of non-mydriatic fundus photography will hopefully help to restore ocular fundus examination as a routinely performed component of all neurological examinations. PMID:26444394

  20. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

    Full Text Available A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future.

  1. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and by unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. Calibration research was conducted using a non-metric camera, different calibration tests and various software packages. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper describes the camera calibration process and the details of the calibration methods and models that have been used. The Sony Nex 5 camera calibration was done using the following software: Image Master Calib, the Matlab Camera Calibrator application, and Agisoft Lens. 2D test fields were used for the study. As part of the research, a comparative analysis of the results has been done.
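
    A minimal calibration run of this kind can be reproduced with OpenCV instead of the packages named in the abstract; the sketch below recovers the interior orientation parameters and distortion coefficients of a non-metric camera from photographs of a planar 2D test field (a chessboard). The directory name and board geometry are placeholders.

```python
# Illustrative non-metric camera calibration with a planar test field (OpenCV,
# used here as a stand-in for the software packages named in the abstract).
import glob
import cv2
import numpy as np

board = (9, 6)                                    # inner-corner grid of the test field
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("interior orientation (camera matrix):\n", K)
print("radial/tangential distortion coefficients:", dist.ravel())
```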

  2. Camera-Model Identification Using Markovian Transition Probability Matrix

    Science.gov (United States)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify the statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
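
    The following sketch illustrates one building block of such a feature set: a thresholded transition probability matrix computed along the horizontal direction of a difference 2-D array. The full method works on JPEG-coefficient arrays of the Y and Cb components in four directions and feeds all matrix elements to a multi-class SVM; the synthetic array and threshold below are only placeholders.

```python
# Simplified sketch of a thresholded horizontal transition-probability matrix
# used as camera-model identification features (not the full published method).
import numpy as np

def horizontal_tpm(arr2d, T=4):
    """Transition probabilities between consecutive horizontal differences,
    clipped to [-T, T] (the thresholding step)."""
    diff = arr2d[:, :-1].astype(int) - arr2d[:, 1:].astype(int)
    diff = np.clip(diff, -T, T)
    size = 2 * T + 1
    cur, nxt = diff[:, :-1] + T, diff[:, 1:] + T      # consecutive difference pairs
    tpm = np.zeros((size, size))
    for i in range(size):
        mask = cur == i
        total = mask.sum()
        if total:
            tpm[i] = np.bincount(nxt[mask], minlength=size) / total
    return tpm.ravel()                                # (2T+1)^2 = 81 features for T = 4

rng = np.random.default_rng(3)
features = horizontal_tpm(rng.integers(0, 16, size=(256, 256)))  # stand-in array
print(features.shape)   # (81,)
```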

  3. Comparison of Color Fundus Photography, Infrared Fundus Photography, and Optical Coherence Tomography in Detecting Retinal Hamartoma in Patients with Tuberous Sclerosis Complex.

    Science.gov (United States)

    Bai, Da-Yong; Wang, Xu; Zhao, Jun-Yang; Li, Li; Gao, Jun; Wang, Ning-Li

    2016-05-20

    A sensitive method is required to detect retinal hamartomas in patients with tuberous sclerosis complex (TSC). The aim of the present study was to compare color fundus photography, infrared imaging (IFG), and optical coherence tomography (OCT) in terms of the detection rate of retinal hamartoma in patients with TSC. This study included 11 patients (22 eyes) with TSC, who underwent color fundus photography, IFG, and spectral-domain OCT to detect retinal hamartomas; TSC1 and TSC2 gene testing was also performed. The mean age of the 11 patients was 8.0 ± 2.1 years. The mean spherical equivalent was -0.55 ± 1.42 D by autorefraction with cycloplegia. In the 11 patients (22 eyes), OCT, infrared fundus photography, and color fundus photography revealed 26, 18, and 9 hamartomas, respectively. The predominant hamartoma was type I (55.6%). All the hamartomas detected by color fundus photography or IFG could be detected by OCT. Among color fundus photography, IFG, and OCT, OCT has the highest detection rate for retinal hamartoma in TSC patients; therefore, OCT might be promising for the clinical diagnosis of TSC.

  4. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    Full Text Available The Scheimpflug camera offers a wide range of applications in the fields of typical close-range photogrammetry, particle image velocimetry, and digital image correlation, due to the fact that the depth of view of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras under the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, PCIM, and GNIM are basically equal, while the accuracy of GNIM is slightly lower compared with the other three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  5. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and they vary a lot. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features can also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? This work describes the quality factors which are still valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work contains considerations of how well current measurement methods can be used for presence capture cameras.

  6. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest growing segments of the consumer market today is camera phones. During the past few years total volume has been growing fast, and today millions of mobile phones with cameras will be sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality but in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels is affecting both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  7. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  8. A thresholding based technique to extract retinal blood vessels from fundus images

    Directory of Open Access Journals (Sweden)

    Jyotiprava Dash

    2017-12-01

    Full Text Available Retinal imaging has become a significant tool among all the medical imaging technologies, due to its capability to extract much data that is linked to various eye diseases. Accurate extraction of the blood vessels is therefore necessary, as it helps eye care specialists and ophthalmologists to identify diseases at the early stages. In this paper, we have proposed a computerized technique for extraction of blood vessels from fundus images. The process is conducted in three phases: (i) pre-processing, where the image is enhanced using contrast limited adaptive histogram equalization and a median filter; (ii) segmentation using mean-C thresholding to extract the retinal blood vessels; (iii) post-processing, where a morphological cleaning operation is used to remove isolated pixels. The performance of the proposed method was tested on the Digital Retinal Images for Vessel Extraction (DRIVE) and Child Heart and Health Study in England (CHASE_DB1) databases, and experimental results show that our method achieves accuracies of 0.955 and 0.954, respectively.
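
    A compact version of the three phases can be written in a few lines; the sketch below follows the same structure with scikit-image and SciPy, but the window size and the constant C are assumed values, not parameters reported in the paper.

```python
# Hedged sketch of the three-phase vessel extraction (CLAHE + median filter,
# mean-C thresholding, morphological cleaning); parameters are assumptions.
import numpy as np
from scipy import ndimage
from skimage import exposure, filters, morphology

def extract_vessels(green, window=25, C=0.03):
    # (i) pre-processing: CLAHE followed by median filtering
    g = exposure.equalize_adapthist(green)
    g = filters.median(g, morphology.disk(3))
    # (ii) segmentation: mean-C thresholding, i.e. a pixel is a vessel if it is
    #      darker than its local mean by more than C
    local_mean = ndimage.uniform_filter(g, size=window)
    vessels = g < (local_mean - C)
    # (iii) post-processing: remove isolated pixels and tiny regions
    return morphology.remove_small_objects(vessels, min_size=50)

green = np.random.default_rng(4).random((584, 565))   # stand-in for a DRIVE green channel
vessel_mask = extract_vessels(green)
```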

  9. Biomedical image acquisition system using a gamma camera

    International Nuclear Information System (INIS)

    Jara B, A.T.; Sevillano, J.; Del Carpio S, J.A.

    2003-01-01

    A PC acquisition board for gamma camera images has been developed. The digital system has been described using VHDL and has been synthesized and implemented in an Altera Max7128S CPLD and two 16L8 PALs. The use of programmable-logic technologies has afforded a higher scale of integration and a reduction of the digital delays, and has also allowed us to modify and update the entire digital design easily. (orig.)

  10. Rapid grading of fundus photographs for diabetic retinopathy using crowdsourcing.

    Science.gov (United States)

    Brady, Christopher J; Villanti, Andrea C; Pearson, Jennifer L; Kirchner, Thomas R; Gupta, Omesh P; Shah, Chirag P

    2014-10-30

    Screening for diabetic retinopathy is both effective and cost-effective, but rates of screening compliance remain suboptimal. As screening improves, new methods to deal with screening data may help reduce the human resource needs. Crowdsourcing has been used in many contexts to harness distributed human intelligence for the completion of small tasks including image categorization. Our goal was to develop and validate a novel method for fundus photograph grading. An interface for fundus photo classification was developed for the Amazon Mechanical Turk crowdsourcing platform. We posted 19 expert-graded images for grading by Turkers, with 10 repetitions per photo for an initial proof-of-concept (Phase I). Turkers were paid US $0.10 per image. In Phase II, one prototypical image from each of the four grading categories received 500 unique Turker interpretations. Fifty draws of 1-50 Turkers were then used to estimate the variance in accuracy derived from randomly drawn samples of increasing crowd size to determine the minimum number of Turkers needed to produce valid results. In Phase III, the interface was modified to attempt to improve Turker grading. Across 230 grading instances in the normal versus abnormal arm of Phase I, 187 images (81.3%) were correctly classified by Turkers. Average time to grade each image was 25 seconds, including time to review training images. With the addition of grading categories, time to grade each image increased and percentage of images graded correctly decreased. In Phase II, area under the curve (AUC) of the receiver-operator characteristic (ROC) indicated that sensitivity and specificity were maximized after 7 graders for ratings of normal versus abnormal (AUC=0.98) but was significantly reduced (AUC=0.63) when Turkers were asked to specify the level of severity. With improvements to the interface in Phase III, correctly classified images by the mean Turker grade in four-category grading increased to a maximum of 52.6% (10/19 images
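
    The aggregation step (taking the mode of the Turker grades per image and scoring it against the expert grade) is easy to express in code. The sketch below uses synthetic gradings rather than the study's data, with a toy assumption that each Turker agrees with the expert 80% of the time.

```python
# Sketch of crowd-grade aggregation: mode of Turker grades per image, plus
# accuracy and ROC AUC against expert grades (synthetic data, not the study's).
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n_images, n_workers = 19, 10
expert = rng.integers(0, 2, size=n_images)                  # 0 = normal, 1 = abnormal
turker = np.where(rng.random((n_images, n_workers)) < 0.8,  # assumed 80% per-grader accuracy
                  expert[:, None], 1 - expert[:, None])

crowd_mode = stats.mode(turker, axis=1).mode.ravel()        # majority grade per image
crowd_score = turker.mean(axis=1)                           # fraction voting "abnormal"
print("accuracy:", (crowd_mode == expert).mean())
print("AUC:", roc_auc_score(expert, crowd_score))
```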

  11. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak camera readout device. The device consists of a low-light-level television camera, made of a solid state C.C.D. array coupled to an image intensifier, associated with a video digitizer coupled to a microcomputer system. The streak camera images are picked up as a video signal, digitized and stored. This system allows fast recording and automatic processing of the data provided by the streak tube.

  12. Statistical characterization and segmentation of drusen in fundus images.

    Science.gov (United States)

    Santos-Villalobos, H; Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E

    2011-01-01

    Age related Macular Degeneration (AMD) is a disease of the retina associated with aging. AMD progression in patients is characterized by drusen, pigmentation changes, and geographic atrophy, which can be seen using fundus imagery. The level of AMD is characterized by standard scaling methods, which can be somewhat subjective in practice. In this work we propose a statistical image processing approach to segment drusen with the ultimate goal of characterizing the AMD progression in a data set of longitudinal images. The method characterizes retinal structures with a statistical model of the colors in the retina image. When comparing the segmentation results of the method between longitudinal images with known AMD progression and those without, the method detects progression in our longitudinal data set with an area under the receiver operating characteristics curve of 0.99.
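
    In the spirit of the statistical colour model mentioned above, a very reduced sketch is shown below: the background retina colours are modelled with a single Gaussian, and pixels that are simultaneously strong outliers and brighter than average are flagged as drusen candidates. The published method is considerably more elaborate; this only illustrates the idea, and the input image is a synthetic placeholder.

```python
# Hedged sketch of a colour-statistics drusen detector (single-Gaussian
# background model + Mahalanobis-distance outliers); not the published method.
import numpy as np

def drusen_candidates(rgb, threshold=3.0):
    pixels = rgb.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    diff = pixels - mean
    # squared Mahalanobis distance of every pixel from the background model
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    outlier = d2 > threshold ** 2
    brighter = pixels.sum(axis=1) > pixels.sum(axis=1).mean()   # drusen appear bright
    return (outlier & brighter).reshape(rgb.shape[:2])

rgb = np.random.default_rng(6).random((512, 512, 3))   # stand-in for a fundus image
candidate_mask = drusen_candidates(rgb)
```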

  13. Does Fundus Fluorescein Angiography Procedure Affect Ocular Pulse Amplitude?

    Directory of Open Access Journals (Sweden)

    Gökhan Pekel

    2013-01-01

    Full Text Available Purpose. This study examines the effects of the fundus fluorescein angiography (FFA) procedure on ocular pulse amplitude (OPA) and intraocular pressure (IOP). Materials and Methods. Sixty eyes of 30 nonproliferative diabetic retinopathy patients (15 males, 15 females) were included in this cross-sectional case series. IOP and OPA were measured with the Pascal dynamic contour tonometer before and 5 minutes after intravenous fluorescein dye injection. Results. Pre-FFA mean OPA value was  mmHg and post-FFA mean OPA value was  mmHg. Pre-FFA mean IOP value was  mmHg and post-FFA mean IOP value was  mmHg. Conclusion. Although both mean OPA and IOP values decreased after the FFA procedure, the difference was not statistically significant. This clinical trial is registered with Australian New Zealand Clinical Trials Registry number ACTRN12613000433707.

  14. Statistical Characterization and Segmentation of Drusen in Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Karnowski, Thomas Paul [ORNL; Aykac, Deniz [ORNL; Giancardo, Luca [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Nichols, Trent L [ORNL; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Age related Macular Degeneration (AMD) is a disease of the retina associated with aging. AMD progression in patients is characterized by drusen, pigmentation changes, and geographic atrophy, which can be seen using fundus imagery. The level of AMD is characterized by standard scaling methods, which can be somewhat subjective in practice. In this work we propose a statistical image processing approach to segment drusen with the ultimate goal of characterizing the AMD progression in a data set of longitudinal images. The method characterizes retinal structures with a statistical model of the colors in the retina image. When comparing the segmentation results of the method between longitudinal images with known AMD progression and those without, the method detects progression in our longitudinal data set with an area under the receiver operating characteristics curve of 0.99.

  15. Digital security technology simplified.

    Science.gov (United States)

    Scaglione, Bernard J

    2007-01-01

    Digital security technology is making great strides in replacing analog and other traditional security systems, including CCTV, card access, personal identification and alarm monitoring applications. Like any new technology, the author says, it is important to understand its benefits and limitations before purchasing and installing, to ensure its proper operation and effectiveness. This article is a primer for security directors on how digital technology works. It provides an understanding of the key components which make up the foundation for digital security systems, focusing on three key aspects of the digital security world: the security network, IP cameras and IP recorders.

  16. Radiographic film digitizing devices

    International Nuclear Information System (INIS)

    McFee, W.H.

    1988-01-01

    Until recently, all film digitizing devices for use with teleradiology or picture archiving and communication systems used a video camera to capture an image of the radiograph for subsequent digitization. The development of film digitizers that use a laser beam to scan the film represents a significant advancement in digital technology, resulting in improved image quality compared with video scanners. This paper discusses differences in resolution, efficiency, reliability, and the cost between these two types of devices. The results of a modified receiver operating characteristic comparison study of a video scanner and a laser scanner manufactured by the same company are also discussed

  17. Crowdsourcing to Evaluate Fundus Photographs for the Presence of Glaucoma.

    Science.gov (United States)

    Wang, Xueyang; Mudie, Lucy I; Baskaran, Mani; Cheng, Ching-Yu; Alward, Wallace L; Friedman, David S; Brady, Christopher J

    2017-06-01

    To assess the accuracy of crowdsourcing for grading optic nerve images for glaucoma using Amazon Mechanical Turk before and after training modules. Images (n=60) from 2 large population studies were graded for glaucoma status and vertical cup-to-disc ratio (VCDR). In the baseline trial, users on Amazon Mechanical Turk (Turkers) graded fundus photos for glaucoma and VCDR after reviewing annotated example images. In 2 additional trials, Turkers viewed a 26-slide PowerPoint training or a 10-minute video training and passed a quiz before being permitted to grade the same 60 images. Each image was graded by 10 unique Turkers in all trials. The mode of Turker grades for each image was compared with an adjudicated expert grade to determine accuracy as well as the sensitivity and specificity of Turker grading. In the baseline study, 50% of the images were graded correctly for glaucoma status and the area under the receiver operating characteristic (AUROC) was 0.75 [95% confidence interval (CI), 0.64-0.87]. Post-PowerPoint training, 66.7% of the images were graded correctly with AUROC of 0.86 (95% CI, 0.78-0.95). Finally, Turker grading accuracy was 63.3% with AUROC of 0.89 (95% CI, 0.83-0.96) after video training. Overall, Turker VCDR grades for each image correlated with expert VCDR grades (Bland-Altman plot mean difference=-0.02). Turkers graded 60 fundus images quickly and at low cost, with grading accuracy, sensitivity, and specificity, all improving with brief training. With effective education, crowdsourcing may be an efficient tool to aid in the identification of glaucomatous changes in retinal images.

  18. Characteristics of Fundus Autofluorescence in Active Polypoidal Choroidal Vasculopathy

    Directory of Open Access Journals (Sweden)

    Zafer Öztaş

    2016-08-01

    Full Text Available Objectives: To define characteristic fundus autofluorescence (FAF) findings in eyes with active polypoidal choroidal vasculopathy (PCV). Materials and Methods: Thirty-five eyes of 29 patients with active PCV who were diagnosed at Ege University Faculty of Medicine, Department of Ophthalmology, Retina Division between January 2012 and November 2014 were included in the study. All the patients underwent a complete ophthalmological examination including fundus photography, spectral-domain optical coherence tomography, fluorescein angiography, FAF photography, and indocyanine green angiography (ICGA). ICGA was used to diagnose active PCV and identify lesion components. FAF findings were described at the retinal site of the corresponding lesions identified and diagnosed using ICGA. Results: The mean age of the 29 study patients (15 men, 14 women) was 64.6±7.5 years (range, 54-82 years). ICGA revealed active PCV in 35 eyes, consisting of polypoid lesions in 11 eyes (31.4%), branching vascular networks (BVN) in 10 eyes (28.6%), and a combination of polypoid lesions and BVNs in 14 eyes (40%). On FAF images, 4 different patterns were detected at the corresponding retinal sites of the 25 polypoid lesions detected by ICGA: confluent hypoautofluorescence with a hyperautofluorescent ring in 18 eyes (72%), hyperautofluorescence with a hypoautofluorescent ring in 2 eyes (8%), confluent hypoautofluorescence in 1 eye (4%), and granular hypoautofluorescence in 1 eye (4%). The remaining 3 eyes (12%) demonstrated blocked hypoautofluorescence because of the excessive hemorrhaging in the macula. The FAF images showed the granular hypoautofluorescent FAF pattern in all 24 BVNs (100%), consistent with the location of the lesions on ICGA. Conclusion: The typical PCV lesions, polypoid lesions and BVNs, had characteristic autofluorescent findings on FAF imaging. Non-invasive, quick, and repeatable FAF imaging can be considered a reliable and helpful diagnostic technique for the diagnosis of

  19. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and its patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to external host media via a network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for stored images/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  20. Aplicación de la fotografía métrica en edificación mediante el uso de la cámara digital convencional: un caso de estudio aplicado al patrimonio arqueológico = Application of metric photography in building using the conventional digital camera: a case study applied to archaeological heritage

    Directory of Open Access Journals (Sweden)

    Jose Antonio Barrera Vera

    2017-04-01

    Full Text Available Stratigraphic analysis constitutes an essential diagnostic tool in archaeological work, allowing archaeologists, historians and anthropologists to decipher the arrangement and interrelation of the different strata and the chronological ordering of the remains found. In this field, photogrammetry carried out with a conventional digital camera and widely available software constitutes a versatile, efficient and affordable alternative to the conventional representation techniques, which are based on artisanal procedures loaded with subjectivity and whose main limitations are analyzed here. This article establishes a simple methodology and a systematic model for the documentation and preservation of stratigraphic units in archaeological excavations, compatible with the technique of stratigraphic analysis based on the Harris matrix. The validity and possibilities of the method have been confirmed in the archaeological intervention project carried out in the Royal Chapel (Capilla Real) of Seville Cathedral.

  1. Prevention of increased abnormal fundus autofluorescence with blue light-filtering intraocular lenses.

    Science.gov (United States)

    Nagai, Hiroyuki; Hirano, Yoshio; Yasukawa, Tsutomu; Morita, Hiroshi; Nozaki, Miho; Wolf-Schnurrbusch, Ute; Wolf, Sebastian; Ogura, Yuichiro

    2015-09-01

    To observe changes in fundus autofluorescence 2 years after implantation of blue light-filtering (yellow-tinted) and ultraviolet light-filtering (colorless) intraocular lenses (IOLs). Department of Ophthalmology and Visual Science, Nagoya City University Graduate School of Medical Sciences, Nagoya, Japan, and the Department of Ophthalmology, University of Bern, Bern, Switzerland. Prospective comparative observational study. Patients were enrolled who had cataract surgery with implantation of a yellow-tinted or colorless IOL and for whom postoperative images were obtained on which the fundus autofluorescence was measurable using the Heidelberg Retina Angiogram 2. The fundus autofluorescence in the images was classified into 8 abnormal patterns based on the classification of the International Fundus Autofluorescence Classification Group. The presence of normal fundus autofluorescence, geographic atrophy, and wet age-related macular degeneration (AMD) also was recorded. The fundus findings at baseline and 2 years postoperatively were compared. Fifty-two eyes with a yellow-tinted IOL and 79 eyes with a colorless IOL were included. Abnormal fundus autofluorescence did not develop or increase in the yellow-tinted IOL group; however, progressive abnormal fundus autofluorescence developed or increased in 12 eyes (15.2%) in the colorless IOL group (P = .0016). New drusen, geographic atrophy, and choroidal neovascularization were observed mainly in the colorless IOL group. The incidence of AMD was statistically significantly higher in the colorless IOL group (P = .042). Two years after cataract surgery, significant differences were seen in the progression of abnormal fundus autofluorescence between the 2 groups. The incidence of AMD was lower in eyes with a yellow-tinted IOL.

  2. Imperceptible watermarking for security of fundus images in tele-ophthalmology applications and computer-aided diagnosis of retina diseases.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore

    2017-12-01

    The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent error in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security issue of medical fundus images for tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. In the proposed work, the patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter to maintain perceptual transparency for a variety of fundus images, such as healthy fundus or disease-affected images. In the proposed method, insertion of the watermark into the fundus image does not affect the automatic image-processing diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis associated with the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification of the fundus image at the diagnosis centre. The proposed watermarking system was tested on a comprehensive database of fundus images and the results are convincing. The results indicate that the proposed watermarking method is imperceptible and does not affect computer-vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed watermarking system applicable for authentication of fundus images for computer-aided diagnosis and tele-ophthalmology applications.
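
    To make the embedding idea concrete, the sketch below hides one patient-ID bit per image block by forcing the parity of the quantised largest singular value; a fixed quantisation step is used here, whereas the paper adapts the quantisation parameter to the image, and the block data are random placeholders.

```python
# Illustrative singular-value-domain embedding by parity quantisation
# (fixed step; the published method adapts the step per image).
import numpy as np

STEP = 8.0

def embed_bit(block, bit):
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    q = int(np.round(S[0] / STEP))
    if q % 2 != bit:                       # force parity of the quantised value
        q += 1
    S[0] = q * STEP
    return U @ np.diag(S) @ Vt

def extract_bit(block):
    s0 = np.linalg.svd(block, compute_uv=False)[0]
    return int(np.round(s0 / STEP)) % 2

rng = np.random.default_rng(7)
image = rng.random((8, 32)) * 255                      # stand-in for a row of fundus blocks
blocks = [image[:, 8 * i:8 * (i + 1)] for i in range(4)]
patient_bits = [1, 0, 1, 1]                            # fragment of an encoded patient ID
marked = [embed_bit(b, bit) for b, bit in zip(blocks, patient_bits)]
print([extract_bit(b) for b in marked])                # -> [1, 0, 1, 1]
```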

  3. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  4. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  5. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins

  6. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
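
    The multi-scale Laplacian-of-Gaussian edge detection mentioned in the abstract can be prototyped in software before being mapped to the FPGA; the sketch below is such a reference model in Python (the actual architecture is described in a hardware description language such as VHDL), with assumed scales and threshold.

```python
# Software reference model (not the FPGA architecture itself) of multi-scale
# Laplacian-of-Gaussian edge detection; scales and threshold are assumptions.
import numpy as np
from scipy import ndimage

def multiscale_log_edges(image, sigmas=(1.0, 2.0, 4.0), threshold=0.01):
    """Zero-crossings of the LoG response with sufficient magnitude, OR-ed over scales."""
    edges = np.zeros(image.shape, dtype=bool)
    for sigma in sigmas:
        log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        zero_cross = (np.sign(log) != np.sign(np.roll(log, 1, axis=0))) | \
                     (np.sign(log) != np.sign(np.roll(log, 1, axis=1)))
        edges |= zero_cross & (np.abs(log) > threshold)
    return edges

frame = np.random.default_rng(8).random((480, 640))   # stand-in for a CMOS sensor frame
edge_map = multiscale_log_edges(frame)
```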

  7. Digital imaging primer

    CERN Document Server

    Parkin, Alan

    2016-01-01

    Digital Imaging targets everyone with an interest in digital imaging, be they professional or private, who uses even quite modest equipment such as a PC, digital camera and scanner, a graphics editor such as Paint, and an inkjet printer. Uniquely, it is intended to fill the gap between highly technical texts for academics (with access to expensive equipment) and superficial introductions for amateurs. The four-part treatment spans theory, technology, programs and practice. Theory covers integer arithmetic, additive and subtractive color, greyscales, computational geometry, and a new presentation of discrete Fourier analysis; Technology considers bitmap file structures, scanners, digital cameras, graphic editors, and inkjet printers; Programs develops several processing tools for use in conjunction with a standard Paint graphics editor and supplementary processing tools; Practice discusses 1-bit, greyscale, 4-bit, 8-bit, and 24-bit images. Relevant QBASIC code is supplied on an accompa...

  8. Integration of USB and firewire cameras in machine vision applications

    Science.gov (United States)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer market cameras is hitting the main stream. By using these devices, system designers and integrators will be well posited to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for like image quality, maintainable frame rates, image size and resolution, supported operating system, and ease of software integration. This paper will describe briefly a couple of the consumer digital standards, and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  9. Interpretations of Fundus Autofluorescence from Studies of the Bisretinoids of the Retina

    OpenAIRE

    Sparrow, Janet R.; Yoon, Kee Dong; Wu, Yalin; Yamamoto, Kazunori

    2010-01-01

    Elevated fundus autofluorescence signals can reflect enhanced lipofuscin in RPE cells, augmented fluorescence due to photooxidation, and/or excess bisretinoid fluorophores in photoreceptor cells due to mishandling of vitamin A aldehyde by dysfunctional cells.

  10. Adaptive optics scanning laser ophthalmoscopy in fundus imaging, a review and update

    Directory of Open Access Journals (Sweden)

    Bing Zhang

    2017-11-01

    Full Text Available Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review first gives a brief history of adaptive optics (AO) and AO-SLO. It then compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography and optical coherence tomography) and with other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, an update of the current research situation in AO-SLO is given for different fundus structures, such as photoreceptors (cones and rods), fundus vessels, the retinal pigment epithelium layer, the retinal nerve fiber layer, the ganglion cell layer and the lamina cribrosa. Finally, the review indicates possible future research directions for AO-SLO.

  11. Use of a color CMOS camera as a colorimeter

    Science.gov (United States)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
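
    As a rough illustration of the colorimetric computation such a device performs, the sketch below converts linear camera RGB to CIE XYZ and averages the chromaticity of a patch. The 3x3 matrix is the standard sRGB/D65 one, used here as an assumption; a real imaging colorimeter would substitute a matrix calibrated for the specific camera.

```python
import numpy as np

# Standard sRGB (D65) linear-RGB -> CIE XYZ matrix; a calibrated colorimeter
# would replace this with a matrix measured for the specific camera sensor.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb_linear: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) linear RGB image to CIE XYZ, pixel by pixel."""
    return rgb_linear @ SRGB_TO_XYZ.T

def mean_chromaticity(xyz: np.ndarray) -> tuple:
    """Average xy chromaticity over the image, e.g. of a displayed test patch."""
    X, Y, Z = xyz.reshape(-1, 3).sum(axis=0)
    s = X + Y + Z
    return X / s, Y / s
```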

  12. Reliability and reproducibility of disc-foveal angle measurements by non-mydriatic fundus photography

    OpenAIRE

    Le Jeune, Caroline; Chebli, Fayçal; Leon, Lorette; Anthoine, Emmanuelle; Weber, Michel; Péchereau, Alain; Lebranchu, Pierre

    2018-01-01

    Purpose Abnormal torsion could be associated with cyclovertical strabismus, but torsion measurements are not reliable in children. To assess an objective fundus torsion evaluation in a paediatric population, we used Non-Mydriatic Fundus photography (NMFP) in healthy and cyclovertical strabismus patients to evaluate the disc-foveal angle over time and observers. Methods We used a retrospective set of NMFP including 24 A or V-pattern strabismus and 27 age-matched normal children (mean age 6.4 a...

  13. Incidence of fundus oculi and neurocranium changes in patients with migraine in the community

    Directory of Open Access Journals (Sweden)

    M. Musić

    2005-02-01

    Full Text Available The purpose of this study was to monitor possible changes of the fundus oculi and neurocranium in patients with migraine according to their gender, place of residence and education grade. In a significant percentage of our patients with migraine there were no changes of the fundus oculi, CT of the neurocranium or craniogram. The reason for the significant migraine morbidity in rural zones and in patients with a lower education grade, the almost equal proportion of male and female patients, and the tendency of the disease toward older age groups remains unclear. There is a need for further research.

  14. Effect of isoproterenol, phenylephrine, and sodium nitroprusside on fundus pulsations in healthy volunteers.

    OpenAIRE

    Schmetterer, L; Wolzt, M; Salomon, A; Rheinberger, A; Unfried, C; Zanaschka, G; Fercher, A F

    1996-01-01

    AIMS/BACKGROUND: Recently a laser interferometric method for topical measurement of fundus pulsations has been developed. Fundus pulsations in the macular region are caused by the inflow and outflow of blood into the choroid. The purpose of this work was to study the influence of a peripheral vasoconstricting (the alpha 1 adrenoceptor agonist phenylephrine), a predominantly positive inotropic (the non-specific beta adrenoceptor agonist isoproterenol), and a non-specific vasodilating (sodium n...

  15. Imaging autofluorescence temporal signatures of the human ocular fundus in vivo

    Science.gov (United States)

    Papour, Asael; Taylor, Zachary; Stafsudd, Oscar; Tsui, Irena; Grundfest, Warren

    2015-11-01

    We demonstrate real-time in vivo fundus imaging capabilities of our fluorescence lifetime imaging technology for the first time. This implementation of lifetime imaging uses light emitting diodes to capture full-field images capable of showing direct tissue contrast without executing curve fitting or lifetime calculations. Preliminary results of fundus images are presented, investigating autofluorescence imaging potential of various retina biomarkers for early detection of macular diseases.

  16. Adaptive optics scanning laser ophthalmoscopy in fundus imaging, a review and update

    OpenAIRE

    Zhang, Bing; Li, Ni; Kang, Jie; He, Yi; Chen, Xiao-Ming

    2017-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review firstly gives a brief history of adaptive optics (AO) and AO-SLO. Then it compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography and optical coherence tomography) and other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherenc...

  17. Application of Optical Measurement Techniques During Stages of Pregnancy: Use of Phantom High Speed Cameras for Digital Image Correlation (D.I.C.) During Baby Kicking and Abdomen Movements

    Science.gov (United States)

    Gradl, Paul

    2016-01-01

    Paired images were collected using a projected pattern instead of the standard practice of painting a speckle pattern on the subject's abdomen. The high-speed cameras were post-triggered after movements were felt. Data were collected at 120 fps, limited by the 60 Hz refresh rate of the projector. To confirm that the kick and movement data were real, a background test with no baby movement was conducted (to correct for breathing and body motion).

  18. Camera Trajectory fromWide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the
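
    The two-parameter model itself is not reproduced in the abstract, so the sketch below uses a simpler equidistant fisheye mapping (r = f·θ) as a stand-in to illustrate how an image radius is turned into a viewing direction; deriving the focal length from the 180° field of view and the 1600-pixel image circle is an assumption made only for illustration.

```python
import numpy as np

# Simplified stand-in for the omnidirectional model discussed above: an
# equidistant fisheye mapping r = f * theta. Micusik's actual two-parameter
# model is more general; this only illustrates the radius-to-angle idea.
IMAGE_CIRCLE_DIAMETER_PX = 1600          # from the abstract
FIELD_OF_VIEW_DEG = 180.0                # Nikon FC-E9 view angle

def radius_to_angle(r_px: float) -> float:
    """Viewing angle (radians from the optical axis) for an image radius in pixels."""
    f = (IMAGE_CIRCLE_DIAMETER_PX / 2) / np.deg2rad(FIELD_OF_VIEW_DEG / 2)
    return r_px / f

def pixel_to_ray(u: float, v: float, cx: float, cy: float) -> np.ndarray:
    """Unit 3D ray for pixel (u, v) given the principal point (cx, cy)."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = radius_to_angle(r)
    phi = np.arctan2(dy, dx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```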

  19. INVESTIGATING THE SUITABILITY OF MIRRORLESS CAMERAS IN TERRESTRIAL PHOTOGRAMMETRIC APPLICATIONS

    Directory of Open Access Journals (Sweden)

    A. H. Incekara

    2017-11-01

    Full Text Available Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between the two camera types is the presence of the mirror mechanism, which means that the beam incoming through the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because the sections are almost overlapping. The mirrored camera was more internally consistent with respect to changes in model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  20. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands. The camera operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier applications such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  1. Scientific Objectives of Small Carry-on Impactor (SCI) and Deployable Camera 3 Digital (DCAM3-D): Observation of an Ejecta Curtain and a Crater Formed on the Surface of Ryugu by an Artificial High-Velocity Impact

    Science.gov (United States)

    Arakawa, M.; Wada, K.; Saiki, T.; Kadono, T.; Takagi, Y.; Shirai, K.; Okamoto, C.; Yano, H.; Hayakawa, M.; Nakazawa, S.; Hirata, N.; Kobayashi, M.; Michel, P.; Jutzi, M.; Imamura, H.; Ogawa, K.; Sakatani, N.; Iijima, Y.; Honda, R.; Ishibashi, K.; Hayakawa, H.; Sawada, H.

    2017-07-01

    The Small Carry-on Impactor (SCI) equipped on Hayabusa2 was developed to produce an artificial impact crater on the primitive Near-Earth Asteroid (NEA) 162173 Ryugu (Ryugu) in order to explore the asteroid subsurface material unaffected by space weathering and thermal alteration by solar radiation. An exposed fresh surface by the impactor and/or the ejecta deposit excavated from the crater will be observed by remote sensing instruments, and a subsurface fresh sample of the asteroid will be collected there. The SCI impact experiment will be observed by a Deployable CAMera 3-D (DCAM3-D) at a distance of ˜1 km from the impact point, and the time evolution of the ejecta curtain will be observed by this camera to confirm the impact point on the asteroid surface. As a result of the observation of the ejecta curtain by DCAM3-D and the crater morphology by onboard cameras, the subsurface structure and the physical properties of the constituting materials will be derived from crater scaling laws. Moreover, the SCI experiment on Ryugu gives us a precious opportunity to clarify effects of microgravity on the cratering process and to validate numerical simulations and models of the cratering process.

  2. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10⁶-10⁸ rad was developed. To develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  3. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10⁶-10⁸ rad was developed. To develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  4. The Role of Fundus Autofluorescence in Late-Onset Retinitis Pigmentosa (LORP) Diagnosis

    Science.gov (United States)

    Lee, Tamara J.; Hwang, John C.; Chen, Royce W. S.; Lima, Luiz H.; Wang, Nan-Kai; Tosi, Joaquin; Freund, K. Bailey; Yannuzzi, Lawrence A.; Tsang, Stephen H.

    2015-01-01

    Purpose To demonstrate the utility and characteristics of fundus autofluorescence in late-onset retinitis pigmentosa. Methods Observational case series. Patients diagnosed with late-onset retinitis pigmentosa were identified retrospectively in an institutional setting. Twelve eyes of six patients were identified and medical records were reviewed. Results All patients presented with slowly progressive peripheral field loss and initial clinical examination revealed only subtle retinal changes. There was a notable lack of intraretinal pigment migration in all patients. Five out of six patients underwent magnetic resonance imaging of the brain to rule out intracranial processes and all were referred from another ophthalmologist for further evaluation. Fundus autofluorescence was ultimately employed in all patients and revealed more extensive retinal pathology than initially appreciated on clinical examination. Fundus autofluorescence directed the workup toward a retinal etiology in all cases and led to the eventual diagnosis of late-onset retinitis pigmentosa through electroretinogram testing. Conclusion Fundus autofluorescence may be a more sensitive marker for retinal pathology than stereo fundus biomicroscopy alone in late-onset retinitis pigmentosa. Early use of fundus autofluorescence imaging in the evaluation of patients with subtle retinal lesions and complaints of peripheral field loss may be an effective strategy for timely and cost-efficient diagnosis. PMID:23899229

  5. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.
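
    The abstract describes a firmware Blind Sources Separation stage; as an off-line, hedged sketch of the same idea, the code below applies FastICA to synthetic per-pixel multi-band measurements. The band count, source count, and mixing are placeholders, not the paper's design.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Minimal off-line sketch of a Blind Sources Separation step, assuming each
# pixel provides a vector of band measurements (rows = pixels, columns = bands).
# The data below are synthetic placeholders standing in for obscurant sources.
rng = np.random.default_rng(0)
n_pixels, n_bands, n_sources = 10_000, 8, 3
sources = rng.laplace(size=(n_pixels, n_sources))   # e.g. fire, smoke, background
mixing = rng.normal(size=(n_sources, n_bands))      # unknown band responses
observations = sources @ mixing                      # what the sensor records

ica = FastICA(n_components=n_sources, random_state=0)
recovered = ica.fit_transform(observations)  # per-pixel estimates of separated sources
```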

  6. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes thereby requiring less photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the costs of the positron camera and improves its performance

  7. Comparison of low-cost handheld retinal camera and traditional table top retinal camera in the detection of retinal features indicating a risk of cardiovascular disease

    Science.gov (United States)

    Joshi, V.; Wigdahl, J.; Nemeth, S.; Zamora, G.; Ebrahim, E.; Soliz, P.

    2018-02-01

    Retinal abnormalities associated with hypertensive retinopathy are useful in assessing the risk of cardiovascular disease, heart failure, and stroke. Assessing these risks as part of primary care can lead to a decrease in the incidence of cardiovascular disease-related deaths. Primary care is a resource limited setting where low cost retinal cameras may bring needed help without compromising care. We compared a low-cost handheld retinal camera to a traditional table top retinal camera on their optical characteristics and performance to detect hypertensive retinopathy. A retrospective dataset of N=40 subjects (28 with hypertensive retinopathy, 12 controls) was used from a clinical study conducted at a primary care clinic in Texas. Non-mydriatic retinal fundus images were acquired using a Pictor Plus hand held camera (Volk Optical Inc.) and a Canon CR1-Mark II tabletop camera (Canon USA) during the same encounter. The images from each camera were graded by a licensed optometrist according to the universally accepted Keith-Wagener-Barker Hypertensive Retinopathy Classification System, three weeks apart to minimize memory bias. The sensitivity of the hand-held camera to detect any level of hypertensive retinopathy was 86% compared to the Canon. Insufficient photographer's skills produced 70% of the false negative cases. The other 30% were due to the handheld camera's insufficient spatial resolution to resolve the vascular changes such as minor A/V nicking and copper wiring, but these were associated with non-referable disease. Physician evaluation of the performance of the handheld camera indicates it is sufficient to provide high risk patients with adequate follow up and management.

  8. The Digital Divide

    Science.gov (United States)

    Hudson, Hannah Trierweiler

    2011-01-01

    Megan is a 14-year-old from Nebraska who just started ninth grade. She has her own digital camera, cell phone, Nintendo DS, and laptop, and one or more of these devices is usually by her side. Compared to the interactions and exploration she's engaged in at home, Megan finds the technology in her classroom falls a little flat. Most of the…

  9. Fundus oculi pigmentation studies simulating the fs-LASIK process

    Energy Technology Data Exchange (ETDEWEB)

    Sander, M; Tetz, M R [Berlin Eye Research Institute, Alt Moabit 101b, 10559 Berlin (Germany); Minet, O; Zabarylo, U [Charite Centrum 6, Arbeitsgruppe Medizinische Physik/Optische Diagnostik, Fabeckstrasse 60–62, 14195 Berlin (Germany); Mueller, M [Augenklinik Ahaus, Am Schlossgraben 13, 48683 Ahaus (Germany)

    2012-06-15

    The femtosecond-laser in situ keratomileusis (fs-LASIK) technique has successfully entered the refractive surgery market to correct ametropia by cutting transparent corneal tissue with ultra-short laser pulses based on photodisruption. The laser pulses in the near infrared range (NIR) generate a laser-induced breakdown (LIOB) in the cornea. By propagating through the eye, a certain amount of the pulse is deposited in the cornea and the remaining energy interacts with the strong absorbing tissue behind. Due to the absorption by the retinal pigment epithelium and the transfer of the thermal energy to surrounding tissue, the transmitted energy can induce damage to the retina. The aim of this project was to find out the threshold influences concerning the tissue and the correlation between the results of the macroscopical appraisal and the fundus oculi pigmentation by simulating the fs-LASIK procedure with two various laser systems in the continuous wave (CW) and fs-regime. Therefore ex-vivo determinations were carried out macroscopically and histopathologically on porcine tissue.

  10. Fundus oculi pigmentation studies simulating the fs-LASIK process

    International Nuclear Information System (INIS)

    Sander, M; Tetz, M R; Minet, O; Zabarylo, U; Mueller, M

    2012-01-01

    The femtosecond-laser in situ keratomileusis (fs-LASIK) technique has successfully entered the refractive surgery market to correct ametropia by cutting transparent corneal tissue with ultra-short laser pulses based on photodisruption. The laser pulses in the near infrared range (NIR) generate a laser-induced breakdown (LIOB) in the cornea. By propagating through the eye, a certain amount of the pulse is deposited in the cornea and the remaining energy interacts with the strong absorbing tissue behind. Due to the absorption by the retinal pigment epithelium and the transfer of the thermal energy to surrounding tissue, the transmitted energy can induce damage to the retina. The aim of this project was to find out the threshold influences concerning the tissue and the correlation between the results of the macroscopical appraisal and the fundus oculi pigmentation by simulating the fs-LASIK procedure with two various laser systems in the continuous wave (CW) and fs-regime. Therefore ex-vivo determinations were carried out macroscopically and histopathologically on porcine tissue

  11. Clinical relevance of quantified fundus autofluorescence in diabetic macular oedema.

    Science.gov (United States)

    Yoshitake, S; Murakami, T; Uji, A; Unoki, N; Dodo, Y; Horii, T; Yoshimura, N

    2015-05-01

    To quantify the signal intensity of fundus autofluorescence (FAF) and evaluate its association with visual function and optical coherence tomography (OCT) findings in diabetic macular oedema (DMO). We reviewed 103 eyes of 78 patients with DMO and 30 eyes of 22 patients without DMO. FAF images were acquired using Heidelberg Retina Angiograph 2, and the signal levels of FAF in the individual subfields of the Early Treatment Diabetic Retinopathy Study grid were measured. We evaluated the association between quantified FAF and the logMAR VA and OCT findings. One hundred and three eyes with DMO had lower FAF signal intensity levels in the parafoveal subfields compared with 30 eyes without DMO. The autofluorescence intensity in the parafoveal subfields was associated negatively with logMAR VA and the retinal thickness in the corresponding subfields. The autofluorescence levels in the parafoveal subfield, except the nasal subfield, were lower in eyes with autofluorescent cystoid spaces in the corresponding subfield than in those without autofluorescent cystoid spaces. The autofluorescence level in the central subfield was related to foveal cystoid spaces but not logMAR VA or retinal thickness in the corresponding area. Quantified FAF in the parafovea has diagnostic significance and is clinically relevant in DMO.

  12. Fundus Autofluorescence and Spectral Domain OCT in Central Serous Chorioretinopathy

    Directory of Open Access Journals (Sweden)

    Luiz Roisman

    2011-01-01

    Full Text Available Background. To describe the standard fundus autofluorescence (FAF), near infrared autofluorescence (NIA) and optical coherence tomography (OCT) patterns in central serous chorioretinopathy (CSC), correlating them with fluorescein angiography. Methods. Cross-sectional observational study, in which patients with at least seven months of CSC underwent ophthalmologic examination, fundus photography, FAF, NIA, fluorescein angiography (FA), and spectral-domain OCT. Results. Seventeen eyes of thirteen patients were included. The presenting features were mottled hyperFAF in the detached area and areas with pigment mottling. NIA images showed areas of hyperNIA similar to FAF and localized areas of hypoNIA, which correlated with the points of leakage in the FA. OCT showed pigment epithelium detachment at the location of these hypoNIA spots. Discussion. FAF showed an increased presence of fluorophores in the area of retinal detachment, which is believed to appear secondary to lipofuscin accumulation in the RPE or the presence of debris in the subretinal fluid. NIA has been related to the choroidal melanin content, and there were areas of both increased and decreased NIA, which could be explained by damage anterior to the retina, basically the RPE and choroid. These findings, along with the PEDs found in the areas of hypoNIA, support the notion of a primary choroidal disease in CSC.

  13. Interactive segmentation for geographic atrophy in retinal fundus images.

    Science.gov (United States)

    Lee, Noah; Smith, R Theodore; Laine, Andrew F

    2008-10-01

    Fundus auto-fluorescence (FAF) imaging is a non-invasive technique for in vivo ophthalmoscopic inspection of age-related macular degeneration (AMD), the most common cause of blindness in developed countries. Geographic atrophy (GA) is an advanced form of AMD and accounts for 12-21% of severe visual loss in this disorder [3]. Automatic quantification of GA is important for determining disease progression and facilitating clinical diagnosis of AMD. Automatic segmentation of pathological images remains an unsolved problem. In this paper we leverage the watershed transform and generalized non-linear gradient operators for interactive segmentation and present an intuitive and simple approach for geographic atrophy segmentation. We compare our approach with the state-of-the-art random walker [5] algorithm for interactive segmentation using ROC statistics. Quantitative evaluation experiments on 100 FAF images show a mean sensitivity/specificity of 98.3/97.7% for our approach and a mean sensitivity/specificity of 88.2/96.6% for the random walker algorithm.
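
    A minimal sketch of the marker-based watershed step is given below, using scikit-image. A plain Sobel gradient stands in for the paper's generalized non-linear gradient operator, and the seed lists stand in for the user's interactive input; both are assumptions.

```python
import numpy as np
from skimage import filters, segmentation

def interactive_watershed(faf_image: np.ndarray, fg_seeds, bg_seeds) -> np.ndarray:
    """Segment a grayscale FAF image from foreground/background scribbles.

    fg_seeds and bg_seeds are lists of (row, col) pixels supplied interactively.
    A plain Sobel gradient is used here; the paper's generalized non-linear
    gradient operator would be substituted at this point.
    """
    gradient = filters.sobel(faf_image.astype(float))
    markers = np.zeros(faf_image.shape, dtype=np.int32)
    for r, c in fg_seeds:
        markers[r, c] = 1          # label 1 = geographic atrophy
    for r, c in bg_seeds:
        markers[r, c] = 2          # label 2 = background
    labels = segmentation.watershed(gradient, markers)
    return labels == 1             # boolean GA mask
```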

  14. Automated retinal vessel type classification in color fundus images

    Science.gov (United States)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alternations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method in a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and 91.5% of AUC in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.
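
    To illustrate the final classification stage, the sketch below uses partial least squares regression as a binary artery/vein classifier by thresholding its continuous output. The feature matrix, number of components, and the 0.5 threshold are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Sketch of a PLS classification step: vessel-segment features (colour,
# colour variation, multi-scale morphology) in X, labels y (1 = artery, 0 = vein).
def train_pls(X_train: np.ndarray, y_train: np.ndarray, n_components: int = 5) -> PLSRegression:
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train.astype(float))
    return pls

def predict_artery(pls: PLSRegression, X: np.ndarray) -> np.ndarray:
    scores = pls.predict(X).ravel()    # continuous PLS response per segment
    return scores >= 0.5               # threshold into artery / vein
```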

  15. [The cell phones as devices for the ocular fundus documentation].

    Science.gov (United States)

    Němčanský, J; Kopecký, A; Timkovič, J; Mašek, P

    2014-12-01

    To present our experience with "smart phones" for examining and documenting the human eye fundus. From September to October 2013 the eye fundus of fifteen patients (8 men, 7 women) was examined; the average age at examination was 58 years (range 20-65 years). Photo-documentation was performed with dilated pupils (tropicamide hydrochloride 1% eye drops) using a Samsung Galaxy Nexus mobile phone running Android 4.3 (Google Inc., Mountain View, CA, USA) and an iPhone 4 running iOS 7.0.4 (Apple Inc., Cupertino, CA, USA), together with a 20 D lens (Volk Optical Inc., Mentor, OH, USA). The images of the retina taken with a mobile phone and the spherical lens are of very good quality, precise and reproducible. Learning this technique is easy and fast, and the learning curve is steep. Photo-documentation of the retina with a mobile phone is a safe, time-saving, easy-to-learn technique which may be used in routine ophthalmologic practice. The main advantages of this technique are the availability, small size and easy portability of the devices.

  16. Extraction of Capillary Non-perfusion from Fundus Fluorescein Angiogram

    Science.gov (United States)

    Sivaswamy, Jayanthi; Agarwal, Amit; Chawla, Mayank; Rani, Alka; Das, Taraprasad

    Capillary Non-Perfusion (CNP) is a condition in diabetic retinopathy where blood ceases to flow to certain parts of the retina, potentially leading to blindness. This paper presents a solution for automatically detecting and segmenting CNP regions from fundus fluorescein angiograms (FFAs). CNPs are modelled as valleys, and a novel technique based on an extrema pyramid is presented for trough-based valley detection. The obtained valley points are used to segment the desired CNP regions by employing a variance-based region growing scheme. The proposed algorithm has been tested on 40 images and validated against expert-marked ground truth. In this paper, we present results of testing and validation of our algorithm against ground truth and compare the segmentation performance against two other methods. The performance of the proposed algorithm is presented as a receiver operating characteristic (ROC) curve. The area under this curve is 0.842 and the distance of the ROC from the ideal point (0,1) is 0.31. The proposed method for CNP segmentation was found to outperform the watershed [1] and heat-flow [2] based methods.
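
    A simplified, unoptimized sketch of variance-based region growing from a detected valley (seed) point is shown below; the standard-deviation bound and 4-connectivity are assumptions, and a practical implementation would update the region statistics incrementally rather than recomputing them.

```python
import numpy as np
from collections import deque

def grow_region(img: np.ndarray, seed: tuple, max_std: float = 8.0) -> np.ndarray:
    """Grow a region from a seed while the region's standard deviation stays low.

    A simplified stand-in for variance-based growing on FFA images; the max_std
    bound and 4-connectivity are illustrative assumptions.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    values = [float(img[seed])]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                candidate = values + [float(img[nr, nc])]
                if np.std(candidate) <= max_std:   # keep the region homogeneous
                    mask[nr, nc] = True
                    values.append(float(img[nr, nc]))
                    queue.append((nr, nc))
    return mask
```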

  17. Commercial Digital Camera to Estimate Postharvest Leaf Area Index in Vitis vinifera L. cv. Cabernet Sauvignon on a Vertical Trellis Uso de una Cámara Digital Comercial para Estimar el Índice de Área Foliar en Vitis vinifera L. cv. Cabernet Sauvignon en Poscosecha Conducida en Espaldera Vertical

    Directory of Open Access Journals (Sweden)

    Miguel Espinosa L.

    2010-06-01

    Full Text Available The leaf area index (LAI) of a vineyard (Vitis vinifera L. cv. Cabernet Sauvignon) located in the commune of Cauquenes, Maule Region, Chile, was estimated from digital images obtained with a commercial camera using two indirect methods: Leaf Area Gap and Brightness (LAGB) and the Photogrammetric Leaf Area Quantification System (PLAQS). The latter requires deleafing of the grapevine. In a normalized difference vegetation index (NDVI) map, three points of vine vigor were selected (high, medium, and low), for each of which horizontal and vertical images were obtained. Images were filtered with the ArcView GIS 3.1 program to retain only leaf pixels and the corresponding pixel counts. Image area and square meters per linear meter were calculated. The best models were selected from three linear regression adjustments: i) LAI from LAGB vertical images with LAI from PLAQS, ii) LAI from LAGB horizontal images with LAI from PLAQS, and iii) LAI from both types of images with PLAQS. The parameters in all models were significant. The adjustment between the LAGB vertical images and PLAQS provides greater simplicity and ease of calculation, since it requires only a vertical image to estimate LAI. Images thus obtained can accurately estimate LAI in this type of cultivar.
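
    The regression step can be sketched as follows: the leaf-pixel count from each filtered image is converted to an estimated leaf area and regressed against the reference LAI from PLAQS. All numbers below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder values standing in for the counts and reference LAI of the study.
leaf_pixels = np.array([120_000, 245_000, 380_000])   # from filtered vertical images
pixels_total = 1_000_000                               # pixels covering the canopy strip
image_area_m2 = 2.0                                    # canopy area seen per image
lai_reference = np.array([0.9, 1.8, 2.7])              # PLAQS reference LAI

# Estimated leaf area per image, regressed against the reference LAI.
X = (leaf_pixels / pixels_total * image_area_m2).reshape(-1, 1)
model = LinearRegression().fit(X, lai_reference)
print(model.coef_, model.intercept_, model.score(X, lai_reference))
```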

  18. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e., automatically controlling the virtual...

  19. Fundus oculi pigmentation studies simulating the fs-LASIK process

    Science.gov (United States)

    Sander, M.; Minet, O.; Zabarylo, U.; Müller, M.; Tetz, M. R.

    2012-06-01

    The femtosecond-laser in situ keratomileusis (fs-LASIK) technique has successfully entered the refractive surgery market to correct ametropia by cutting transparent corneal tissue with ultra-short laser pulses based on photodisruption. The laser pulses in the near infrared range (NIR) generate a laser-induced breakdown (LIOB) in the cornea. By propagating through the eye, a certain amount of the pulse is deposited in the cornea and the remaining energy interacts with the strong absorbing tissue behind. Due to the absorption by the retinal pigment epithelium and the transfer of the thermal energy to surrounding tissue, the transmitted energy can induce damage to the retina. The aim of this project was to find out the threshold influences concerning the tissue and the correlation between the results of the macroscopical appraisal and the fundus oculi pigmentation by simulating the fs-LASIK procedure with two various laser systems in the continuous wave (CW) and fs-regime. Therefore ex-vivo determinations were carried out macroscopically and histopathologically on porcine tissue.

  20. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped ℓ2,1-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped ℓ2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
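
    For reference, the two norms named in the objective can be written in a few lines of NumPy; the cap value below is an arbitrary placeholder, not a value from the paper.

```python
import numpy as np

def l21_norm(Z: np.ndarray) -> float:
    """Sum of the l2 norms of the rows of Z (row-sparsity-inducing norm)."""
    return float(np.linalg.norm(Z, axis=1).sum())

def capped_l21_norm(Z: np.ndarray, cap: float = 1.0) -> float:
    """Capped variant: rows with norm above `cap` (likely outliers) contribute only `cap`."""
    row_norms = np.linalg.norm(Z, axis=1)
    return float(np.minimum(row_norms, cap).sum())
```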

  1. Creating a panorama of the heart with digital images.

    Science.gov (United States)

    Rosebrock, L

    2000-01-01

    Digital imaging offers new opportunities still being discovered by users. This article describes a technique that was created using a digital camera to photograph the entire surface of a rat heart. The technique may have other applications as well.

  2. Automated Detection of Glaucoma From Topographic Features of the Optic Nerve Head in Color Fundus Photographs.

    Science.gov (United States)

    Chakrabarty, Lipi; Joshi, Gopal Datt; Chakravarty, Arunava; Raman, Ganesh V; Krishnadas, S R; Sivaswamy, Jayanthi

    2016-07-01

    To describe and evaluate the performance of an automated CAD system for detection of glaucoma from color fundus photographs. Color fundus photographs of 2252 eyes from 1126 subjects were collected from 2 centers: Aravind Eye Hospital, Madurai and Coimbatore, India. The images of 1926 eyes (963 subjects) were used to train an automated image analysis-based system, which was developed to provide a decision on a given fundus image. A total of 163 subjects were clinically examined by 2 ophthalmologists independently and their diagnostic decisions were recorded. The consensus decision was defined to be the clinical reference (gold standard). Fundus images of eyes with disagreement in diagnosis were excluded from the study. The fundus images of the remaining 314 eyes (157 subjects) were presented to 4 graders and their diagnostic decisions on the same were collected. The performance of the system was evaluated on the 314 images, using the reference standard. The sensitivity and specificity of the system and 4 independent graders were determined against the clinical reference standard. The system achieved an area under receiver operating characteristic curve of 0.792 with a sensitivity of 0.716 and specificity of 0.717 at a selected threshold for the detection of glaucoma. The agreement with the clinical reference standard as determined by Cohen κ is 0.45 for the proposed system. This is comparable to that of the image-based decisions of 4 ophthalmologists. An automated system was presented for glaucoma detection from color fundus photographs. The overall evaluation results indicated that the presented system was comparable in performance to glaucoma classification by a manual grader solely based on fundus image examination.
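
    The evaluation metrics quoted above (area under the ROC curve and Cohen's κ against the clinical reference) can be computed as in the sketch below; the label and score arrays are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Placeholder arrays: 1 = glaucoma, 0 = normal, one entry per eye. The clinical
# consensus plays the role of the reference standard; system_score is the CAD output.
reference = np.array([1, 0, 1, 1, 0, 0, 1, 0])
system_score = np.array([0.9, 0.3, 0.6, 0.8, 0.4, 0.2, 0.4, 0.1])
system_decision = (system_score >= 0.5).astype(int)   # assumed operating threshold

print("AUC:", roc_auc_score(reference, system_score))
print("Cohen kappa vs reference:", cohen_kappa_score(reference, system_decision))
```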

  3. Automatic localization of bifurcations and vessel crossings in digital fundus photographs using location regression

    Science.gov (United States)

    Niemeijer, Meindert; Dumitrescu, Alina V.; van Ginneken, Bram; Abrámoff, Michael D.

    2011-03-01

    Parameters extracted from the vasculature on the retina are correlated with various conditions such as diabetic retinopathy and cardiovascular diseases such as stroke. Segmentation of the vasculature on the retina has been a topic that has received much attention in the literature over the past decade. Analysis of the segmentation result, however, has only received limited attention with most works describing methods to accurately measure the width of the vessels. Analyzing the connectedness of the vascular network is an important step towards the characterization of the complete vascular tree. The retinal vascular tree, from an image interpretation point of view, originates at the optic disc and spreads out over the retina. The tree bifurcates and the vessels also cross each other. The points where this happens form the key to determining the connectedness of the complete tree. We present a supervised method to detect the bifurcations and crossing points of the vasculature of the retina. The method uses features extracted from the vasculature as well as the image in a location regression approach to find those locations of the segmented vascular tree where the bifurcation or crossing occurs (from here, POI, points of interest). We evaluate the method on the publicly available DRIVE database in which an ophthalmologist has marked the POI.

  4. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    Science.gov (United States)

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  5. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with the presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics
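
    One family of hardware-related artifacts used for attribution is the sensor noise residual left after denoising an image. The sketch below uses a simple Gaussian denoiser and a normalized correlation against a camera "fingerprint"; it is a heavily simplified stand-in for the techniques surveyed, not any specific method from the paper.

```python
import numpy as np
from scipy import ndimage

def noise_residual(gray: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Residual = image - denoised image; carries sensor-specific noise traces."""
    denoised = ndimage.gaussian_filter(gray.astype(float), sigma=sigma)
    return gray.astype(float) - denoised

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation between a query residual and a camera fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```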

  6. Evaluating two methods of digital photography in retinopathy screening

    Directory of Open Access Journals (Sweden)

    Li Chen

    2018-02-01

    Full Text Available AIM: To evaluate the advantages of non-mydriatic fundus photography (NMFCS) and mydriatic fundus photography (MFCS) as fundus screening and diagnosis methods, compared with the gold standard, fundus fluorescein angiography (FFA). METHODS: A total of 276 patients enrolled in the Chronic Diabetes Management Archives of 4 streets of Pudong District, Shanghai, underwent diabetic retinopathy (DR) examination including NMFCS, MFCS and FFA. These DR examinations were performed after vision, slit-lamp and dioptroscopy tests, and were reported by professionals. For those with suspected fundus diseases, appointments with a specialist were made for further treatment. RESULTS: A total of 1104 colour fundus images were obtained, of which 1056 (95.65%) could be analyzed. Among the 552 NMFCS images there were 408 assessable images, 116 basically assessable images and 28 unusable images; among the 552 MFCS images there were 432 assessable images, 100 basically assessable images and 20 unusable images. There was no significant difference between NMFCS and MFCS (P>0.05). Compared with FFA, with DR Ⅰ as the critical value, the specificity of digital photography for NMFCS was 95.71% and the sensitivity was 93.56%; for MFCS they were 95.43% and 98.02%. There was no statistically significant difference between the two screening methods (P>0.05). Compared with FFA, with DR Ⅱ as the critical value, the specificity for NMFCS was 95.35% and the sensitivity was 93.44%; for MFCS they were 95.81% and 98.36%. There was no statistically significant difference between the two screening methods (P>0.05). CONCLUSION: Both NMFCS and MFCS can be used for the diagnosis and screening of eye diseases. NMFCS is easier and faster for digital photography and is suitable for mass screening. MFCS is more likely to provide detailed information for follow-up of the disease.
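
    The sensitivity and specificity figures reported against FFA follow directly from per-eye confusion counts, as in the sketch below (the 0/1 arrays are placeholders, with FFA grading as the reference).

```python
import numpy as np

def sensitivity_specificity(reference: np.ndarray, test: np.ndarray) -> tuple:
    """reference/test are 0/1 arrays per eye; reference comes from FFA grading."""
    tp = np.sum((reference == 1) & (test == 1))
    tn = np.sum((reference == 0) & (test == 0))
    fp = np.sum((reference == 0) & (test == 1))
    fn = np.sum((reference == 1) & (test == 0))
    return tp / (tp + fn), tn / (tn + fp)   # (sensitivity, specificity)
```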

  7. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display from both cameras.
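
    A minimal acquisition-and-display loop for the two cameras might look like the OpenCV sketch below; the device indices are assumptions, and cv2.warpPolar is used only as a rough stand-in for the paper's retina-like coordinate transformation and sub-pixel interpolation.

```python
import cv2

# Minimal acquisition/display loop for two cameras, assuming they appear as
# devices 0 and 1. The log-polar unwrapping approximates (but does not
# reproduce) the paper's coordinate transformation for the retina-like sensor.
cap_rect = cv2.VideoCapture(0)      # rectangular-sensor camera (assumed index)
cap_fovea = cv2.VideoCapture(1)     # retina-like sensor camera (assumed index)

while True:
    ok1, frame_rect = cap_rect.read()
    ok2, frame_fovea = cap_fovea.read()
    if not (ok1 and ok2):
        break
    h, w = frame_fovea.shape[:2]
    unwrapped = cv2.warpPolar(frame_fovea, (w, h), (w // 2, h // 2),
                              min(h, w) // 2, cv2.WARP_POLAR_LOG)
    cv2.imshow("rectangular", frame_rect)
    cv2.imshow("retina-like (log-polar unwrapped)", unwrapped)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap_rect.release()
cap_fovea.release()
cv2.destroyAllWindows()
```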

  8. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduces the costs of the positron camera and improves its performance

  9. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  10. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulent codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor

  11. [Fluorescein angiography and optical coherence tomography findings in central fundus of myopic patients].

    Science.gov (United States)

    Avetisov, S E; Budzinskaya, M V; Zhabina, O A; Andreeva, I V; Plyukhova, A A; Kobzova, M V; Musaeva, G M

    2015-01-01

    Myopia prevalence is growing in many countries, including Russia, regardless of geographical and population conditions. The aim was to assess fundus changes in myopic patients at different ocular axial lengths by means of modern diagnostic tools. The study enrolled 97 patients (194 eyes) aged 45 ± 20.17 years with myopia of different degrees. Besides a standard ophthalmic examination, all patients underwent fundus fluorescein angiography and optical coherence tomography. The occurrence of retinal pigment epithelium (RPE) atrophy (diffuse or focal) has been shown to increase with increasing ocular axial length. Only 27 eyes (28.1%) appeared intact. As myopia progression implies axial growth of the eye, it is associated with a more severe decrease in choroid, RPE, and photoreceptor layer thicknesses: the longer the anterior-posterior axis, the thinner the above-mentioned fundus structures. Age-related changes in the fundus are also likely to be more pronounced in eyes with longer axes. Myopic traction maculopathy, which in our cases was the main cause of increased retinal thickness, was diagnosed in 105 eyes, and "outer" macular retinoschisis in 40 eyes. Thus, modern diagnostic tools, such as fluorescein angiography and optical coherence tomography, enable objective assessment of the central fundus.

  12. A Case of Fundus Oculi Albinoticus Diagnosed as Angelman Syndrome by Genetic Testing

    Directory of Open Access Journals (Sweden)

    Yurie Fukiyama

    2018-02-01

    Full Text Available Purpose: To report a case of fundus oculi albinoticus diagnosed as Angelman syndrome (AS via genetic testing. Case Report: This study reports on a 4-year-old boy. Since he had been having respiratory disturbance since birth, he underwent a complete physical examination to investigate the cause. The results indicated that he had various brain congenital abnormalities, such as a thin corpus callosum, as well as hydronephrosis, an atrial septal defect, and skin similar to patients with fundus oculi albinoticus. Examination revealed bilateral fundus oculi albinoticus, mild iridic hypopigmentation, optic atrophy, and poor visual tracking. Genetic testing revealed a deletion in the Prader-Willi syndrome/AS region on chromosome 15, and together with the results of methylation analysis, his condition was diagnosed as AS. Follow-up examinations revealed no change in the fundus oculi albinoticus and optic atrophy, nor did they indicate poor visual tracking. Conclusions: When fundus oculi albinoticus and optic atrophy are observed in patients with multiple malformations, AS should be considered as a differential diagnosis.

  13. Application of RetCamⅡ in the screening of neonatal fundus disease

    Directory of Open Access Journals (Sweden)

    Zhi-Gang Xiao

    2013-08-01

    Full Text Available AIM: To investigate a safe and reliable examination method for neonatal fundus screening. METHODS: Fundus findings of 2 836 neonates examined with RetCamⅡ in our hospital from January 1, 2012 to December 31, 2012 were retrospectively analyzed, including 1 625 cases (57.30%) of premature infants, first examined 1-4 weeks after birth, and 1 211 cases (42.70%) of term infants, first examined within 4 weeks after birth. RESULTS: Totally 454 cases of abnormal fundus were found, including 207 cases (12.74%) of retinopathy of prematurity (ROP): ROP Ⅰ in 118 cases (57%), ROP Ⅱ in 58 cases (28.02%), ROP Ⅲ in 23 cases (11.11%), ROP Ⅳ in 8 cases (3.86%), and no case of ROP Ⅴ. A total of 247 (20.40%) term infants had abnormal fundus, of which 68 cases (27.53%) were developmental or hereditary diseases, retinoblastoma in 1 case (0.40%), retinal hemorrhage in 102 cases (41.30%), retinal exudative changes in 68 cases (27.53%), optic atrophy in 5 cases (2.02%) and optic disc edema in 3 cases (1.21%). CONCLUSION: Neonatal fundus diseases are so various and harmful that early screening deserves attention. Premature infants and high-risk term infants should be treated as the focus group of fundus screening, and RetCamⅡ examination is safe and effective.

  14. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  15. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  16. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    Keywords: polarimetric camera, remote sensing, space systems. Polarimetric data collections are reported at Hermann Hall, Monterey, CA, and on 01 December 2016 at 1226 PST on the rooftop of the Marriott Hotel in Monterey, CA.

  17. Digital stereoscopic imaging

    Science.gov (United States)

    Rao, A. Ravishankar; Jaimes, Alejandro

    1999-05-01

    The convergence of inexpensive digital cameras and cheap hardware for displaying stereoscopic images has created the right conditions for the proliferation of stereoscopic imaging applications. One application, which is of growing importance to museums and cultural institutions, consists of capturing and displaying 3D images of objects at multiple orientations. In this paper, we present our stereoscopic imaging system and methodology for semi-automatically capturing multiple orientation stereo views of objects in a studio setting, and demonstrate the superiority of using a high resolution, high fidelity digital color camera for stereoscopic object photography. We show the superior performance achieved with the IBM TDI-Pro 3000 digital camera developed at IBM Research. We examine various choices related to the camera parameters and image capture geometry, and suggest a range of optimum values that work well in practice. We also examine the effect of scene composition and background selection on the quality of the stereoscopic image display. We demonstrate our technique with turntable views of objects from the IBM Corporate Archive.

  18. Development and application of an automatic system for measuring the laser camera

    International Nuclear Information System (INIS)

    Feng Shuli; Peng Mingchen; Li Kuncheng

    2004-01-01

    Objective: To provide an automatic system for measuring the imaging quality of laser cameras and to make the measurement and analysis automatic. Methods: On a dedicated imaging workstation (SGI 540), the procedure was written in the Matlab language. An automatic measurement and analysis system for laser camera imaging quality was developed according to the laser camera imaging quality measurement standard of the International Electrotechnical Commission (IEC). The measurement system used digital signal processing theory, was based on the characteristics of digital images, and performed automatic measurement and analysis of the laser camera using the sample pictures supplied with the camera. Results: All the imaging quality parameters of the laser camera, including the H-D and MTF curves, optical density resolution at low, middle and high levels, various kinds of geometric distortion, maximum and minimum density, as well as the dynamic range of the gray scale, could be measured by this system. The system was applied to measuring the laser cameras in 20 hospitals in Beijing. The measuring results showed that the system could provide objective and quantitative data, could accurately evaluate the imaging quality of a laser camera, and could correct results obtained by manual measurement based on the sample pictures supplied with the laser camera. Conclusion: The automatic measuring system is an effective and objective tool for testing the quality of laser cameras, and it lays a foundation for future research.
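
    One of the quantities measured above, the MTF curve, can be estimated from a digitized sharp edge in a test image: differentiate the edge spread function to obtain the line spread function and take its Fourier magnitude. The Python sketch below is an illustrative outline of that standard approach, not the Matlab system described in the record; the input edge profile and sample pitch are assumptions.

    ```python
    import numpy as np

    def mtf_from_edge_profile(esf, sample_pitch_mm):
        """Estimate an MTF curve from a 1-D edge spread function (ESF).

        esf             : 1-D array of pixel values sampled across a sharp edge
                          in the test pattern (assumed input).
        sample_pitch_mm : distance between samples in millimetres.
        Returns (spatial frequencies in cycles/mm, normalized MTF values)."""
        lsf = np.diff(esf.astype(np.float64))   # line spread function = derivative of ESF
        lsf = lsf * np.hanning(lsf.size)        # window to reduce truncation artifacts
        mtf = np.abs(np.fft.rfft(lsf))          # modulation transfer = |FFT of LSF|
        mtf /= mtf[0]                           # normalize to 1 at zero frequency
        freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch_mm)
        return freqs, mtf
    ```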

  19. A novel fully integrated handheld gamma camera

    International Nuclear Information System (INIS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-01-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, namely designed to gather in the same device the gamma ray detector with the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for radiopharmaceuticals fast imaging. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype proposed consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed a very low power readout electronics and a dedicated analog to digital conversion system. One of the most critical aspects we faced designing the prototype was the low power consumption, which is mandatory to develop a battery operated device. We have applied this detection device in the lymphoscintigraphy technique (sentinel lymph node mapping) comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results obtained confirm a rapid response of the device and an adequate spatial resolution for the use in the scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could be easily combined into surgical navigation systems.

  20. A novel fully integrated handheld gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Massari, R.; Ucci, A.; Campisi, C. [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy); Scopinaro, F. [University of Rome “La Sapienza”, S. Andrea Hospital, Rome (Italy); Soluri, A., E-mail: alessandro.soluri@ibb.cnr.it [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy)

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, namely designed to gather in the same device the gamma ray detector with the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for radiopharmaceuticals fast imaging. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype proposed consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed a very low power readout electronics and a dedicated analog to digital conversion system. One of the most critical aspects we faced designing the prototype was the low power consumption, which is mandatory to develop a battery operated device. We have applied this detection device in the lymphoscintigraphy technique (sentinel lymph node mapping) comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results obtained confirm a rapid response of the device and an adequate spatial resolution for the use in the scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could be easily combined into surgical navigation systems.

  1. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function, conversion from analogue to digital mode, is not within the control of the operator, but the second type of manipulation is under the control of the operator. These types of manipulations should be done carefully, without sacrificing the integrity of the incoming information.

  2. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high speed dimensional measuring system capable of scanning a thin film network, and determining if there are conductor widths, resistor widths, or spaces not typical of the design for this product is described. The eye of the system is a conventional TV camera, although such devices as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented to the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage, and communicates with the digitizer to select picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal

  3. Fundus autofluorescence and optical coherence tomography findings in thiamine responsive megaloblastic anemia.

    Science.gov (United States)

    Ach, Thomas; Kardorff, Rüdiger; Rohrschneider, Klaus

    2015-01-01

    To report ophthalmologic fundus autofluorescence and spectral domain optical coherence tomography findings in a patient with thiamine responsive megaloblastic anemia (TRMA). A 13-year-old girl with genetically proven TRMA was followed up ophthalmologically (visual acuity, funduscopy, perimetry, electroretinogram) over more than 5 years. Fundus imaging also included autofluorescence and spectral domain optical coherence tomography. During the 5-year follow-up, visual acuity and visual field decreased, despite a special TRMA diet. Funduscopy revealed a bull's eye appearance, whereas fundus autofluorescence showed central and peripheral hyperfluorescence and perifoveal hypofluorescence. Spectral domain optical coherence tomography revealed an affected inner segment ellipsoid band and irregularities in the retinal pigment epithelium and choroid. Autofluorescence and spectral domain optical coherence tomography findings in a patient with TRMA show retinitis pigmentosa-like alterations of the retina, retinal pigment epithelium, and choroid. These findings might progress even under the special TRMA diet, which is indispensable to life. Ophthalmologists should consider TRMA in patients with deafness and ophthalmologic disorders.

  4. PATTERNS OF FUNDUS AUTOFLUORESCENCE DEFECTS IN NEOVASCULAR AGE-RELATED MACULAR DEGENERATION SUBTYPES.

    Science.gov (United States)

    Ozkok, Ahmet; Sigford, Douglas K; Tezel, Tongalp H

    2016-11-01

    To define the characteristic fundus autofluorescence patterns of different exudative age-related macular degeneration subtypes. Cross-sectional study. Fifty-two patients with choroidal neovascularization because of three different neovascular age-related macular degeneration subtypes were included in the study. Macular and peripheral fundus autofluorescence patterns of the study subjects were compared in a masked fashion. Fundus autofluorescence patterns of all three neovascular age-related macular degeneration subtypes revealed similar patterns. However, peripapillary hypo-autofluorescence was more common among patients with polypoidal choroidal vasculopathy (88.2%) compared with patients with retinal angiomatous proliferation (12.5%) and patients with neither retinal angiomatous proliferation nor polypoidal choroidal vasculopathy (21.1%). Peripapillary fundus autofluorescence defects in neovascular age-related macular degeneration may be suggestive of polypoidal choroidal vasculopathy as a variant of neovascular age-related macular degeneration.

  5. [Follow-up on MEWDS by fundus perimetry and multifocal ERG with the SLO].

    Science.gov (United States)

    Bültmann, S; Martin, M; Rohrschneider, K

    2002-09-01

    Most conventional examination techniques, such as perimetry or ERG, may not be sensitive enough to precisely detect functional alterations due to MEWDS. We report on a follow-up performed by fundus perimetry and the new technique of multifocal ERG using the scanning laser ophthalmoscope. A 24-year-old female patient (VA 0.2/0.8) was followed up for 7 weeks with these techniques as well as Octopus perimetry, fluorescence angiography, Ganzfeld ERG and biomicroscopy. Multifocal ERG stimulation (mfERG, Retiscan) was performed with the SLO. Visual acuity improved from 0.2 to 0.8 and the central relative scotoma disappeared, while a relevant increase of P1-wave amplitudes in the mfERG was observed. Combining objective measurements from the fundus-controlled SLO-mfERG with results from fundus perimetry enables good correlation of morphology and results, even for minor alterations of the macula that are accessible to only a few established clinical examinations.

  6. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    Science.gov (United States)

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified using specialized equipment operated manually by optometrists. The proposed work aims to provide an efficient imaging solution which can help automate glaucoma diagnosis using computer vision techniques on digital fundus images. The proposed method segments the optic disc using a geometrical-feature-based strategic framework which improves the detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point contour joining are proposed to construct smooth contours of the optic disc. Following a clinical approach as used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which the vessels first bend away from the optic disc boundary, and connects them to obtain the contour of the optic cup. The proposed method has been compared with the ground truth marked by medical experts, and the similarity parameters used to determine its performance yielded a high segmentation similarity. The proposed method achieved a macro-averaged f-score of 0.9485 and an accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant, can be used for glaucoma screening over a large population, and works in real time.
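
    Once binary masks for the optic disc and optic cup have been segmented (by any method, including the one described above), the screening parameters CDR and CAR reduce to simple mask arithmetic. The Python sketch below is illustrative only and assumes such masks are already available; it is not the authors' implementation.

    ```python
    import numpy as np

    def cdr_car(disc_mask, cup_mask):
        """Compute cup-to-disc diameter ratio (CDR) and area ratio (CAR)
        from binary segmentation masks (assumed inputs; nonzero = inside region).
        The vertical diameter is used, as is common in glaucoma screening."""
        disc_diam = np.any(disc_mask, axis=1).sum()   # vertical extent of the disc in pixels
        cup_diam = np.any(cup_mask, axis=1).sum()     # vertical extent of the cup in pixels
        cdr = cup_diam / disc_diam
        car = cup_mask.astype(bool).sum() / disc_mask.astype(bool).sum()
        return cdr, car
    ```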

  7. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes employing nonbrowning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  8. Digital broadcasting

    International Nuclear Information System (INIS)

    Park, Ji Hyeong

    1999-06-01

    This book contains twelve chapters, which deal with the digitization of the broadcast signal, such as digital open and the digitization of video and sound signals; the digitization of broadcasting equipment like DTPP and digital VTR; the digitization of transmission equipment such as digital STL, digital FPU and digital SNG; the digitization of transmission, covering digital TV and radio transmission; the necessity and advantages of digital broadcasting systems; digital broadcasting systems abroad and in Korea; an outline of digital broadcasting, the advantages of digital TV, the ripple effects of digital broadcasting and related considerations; terrestrial digital broadcasting (DVB-T in Europe, DTV in the U.S.A. and ISDB-T in Japan); HDTV broadcasting; satellite broadcasting; digital TV broadcasting in Korea; digital radio broadcasting; and new broadcasting services.

  9. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    Science.gov (United States)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields. However, the technological level of measuring devices has not been sufficient for some applications. Over recent years, good quality thermal cameras with high resolution and very high thermal sensitivity have started to appear on the market. The development of measuring technologies has allowed the use of infrared thermography in new fields and by a larger number of users. This article describes the research in progress at the Transport Research Centre with a focus on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, digital camera and GPS sensor, was designed for the diagnostics of pavements. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures while using IR thermography.

  10. Multi‐angular observations of vegetation indices from UAV cameras

    DEFF Research Database (Denmark)

    Sobejano-Paz, Veronica; Wang, Sheng; Jakobsen, Jakob

    Unmanned aerial vehicles (UAVs) are found as an alternative to the classical manned aerial photogrammetry, which can be used to obtain environmental data or as a complementary solution to other methods (Nex and Remondino, 2014). Although UAVs have coverage limitations, they have better resolution...... (Berni et al., 2009), hyper spectral camera (Burkart et al., 2015) and photometric elevation mapping sensor (Shahbazi et al., 2015) among others. Therefore, UAVs can be used in many fields such as agriculture, forestry, archeology, architecture, environment and traffic monitoring (Nex and Remondino, 2014......). In this study, the UAV used is a hexacopter s900 equipped with a Global Positioning System (GPS) and two cameras; a digital RGB photo camera and a multispectral camera (MCA), with a resolution of 5472 x 3648 pixels and 1280 x 1024 pixels, respectively. In terms of applications, traditional methods using...

  11. Classification of diabetic retinopathy using fractal dimension analysis of eye fundus image

    Science.gov (United States)

    Safitri, Diah Wahyu; Juniati, Dwi

    2017-08-01

    Diabetes Mellitus (DM) is a metabolic disorder in which the pancreas produces inadequate insulin or the body resists insulin action, so the blood glucose level is high. One of the most common complications of diabetes mellitus is diabetic retinopathy, which can lead to vision problems. Diabetic retinopathy can be recognized by abnormalities in the eye fundus. Those abnormalities are characterized by microaneurysms, hemorrhage, hard exudates, cotton wool spots, and venous changes. Diabetic retinopathy is classified depending on the abnormalities present in the eye fundus: grade 1 if there is a microaneurysm only; grade 2 if there are a microaneurysm and a hemorrhage; and grade 3 if there are microaneurysms, hemorrhage, and neovascularization. This study proposed a method and a processing pipeline for eye fundus images to classify diabetic retinopathy using fractal analysis and K-Nearest Neighbor (KNN). The first phase was an image segmentation process using the green channel, CLAHE, morphological opening, a matched filter, masking, and morphological opening of the binary image. After the segmentation process, the fractal dimension was calculated using the box-counting method and the fractal dimension values were analyzed to classify the diabetic retinopathy. Tests were carried out using the k-fold cross validation method with k=5. Each test used 10 different values of K for the KNN classifier. The best accuracy of this method, 89.17%, was obtained with K=3 or K=4. Based on these results, it can be concluded that the classification of diabetic retinopathy using fractal analysis and KNN performs well.
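
    The box-counting step named above can be sketched in a few lines of Python: cover the segmented (binary) vessel image with grids of decreasing box size, count the occupied boxes, and fit the slope of log(count) against log(1/size). This is an illustrative sketch of the standard definition, not the authors' code; the choice of box sizes is an assumption.

    ```python
    import numpy as np

    def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the fractal (box-counting) dimension of a non-empty binary
        vessel map (2-D array of 0/1 values, assumed input)."""
        counts = []
        for s in box_sizes:
            # trim so the image tiles evenly into s x s boxes
            h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
            trimmed = binary_img[:h, :w]
            # a box is "occupied" if it contains any foreground pixel
            boxes = trimmed.reshape(h // s, s, w // s, s).max(axis=(1, 3))
            counts.append(boxes.sum())
        # slope of log(count) vs log(1/box size) is the box-counting dimension
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope
    ```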

  12. Agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula in the Reykjavik eye study.

    Science.gov (United States)

    Csutak, A; Lengyel, I; Jonasson, F; Leung, I; Geirsdottir, A; Xing, W; Peto, T

    2010-10-01

    To establish the agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula. In 2008, the 12-year follow-up was conducted on 573 participants of the Reykjavik Eye Study. This study included the use of the Optos P200C AF ultra wide-angle laser scanning ophthalmoscope alongside the Zeiss FF 450 conventional digital fundus camera on 121 eyes with or without age-related macular degeneration, using the International Classification System. Of these eyes, detailed grading was carried out on five cases each with hard drusen, geographic atrophy and chorioretinal neovascularisation, and six cases with soft drusen. Exact agreement and κ-statistics were calculated. Comparison of the conventional and ultra wide-angle images in the macula showed an overall 96.43% agreement (κ=0.93), with no disagreement at end-stage disease, although in one eye chorioretinal neovascularisation was graded as drusenoid pigment epithelial detachment. For patients with drusen only, the exact agreement was 96.1%. The detailed grading showed no clinically significant disagreement between the conventional 45° and 200° images. On the basis of our results, there is good agreement between grading conventional and ultra wide-angle images in the macula.
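
    The two agreement measures reported above, exact agreement and Cohen's κ, can be computed directly from the paired grades assigned to the two image types. The Python sketch below illustrates those standard formulas; the grade labels passed in are hypothetical, not the study data.

    ```python
    import numpy as np

    def exact_agreement_and_kappa(grades_a, grades_b):
        """Exact agreement and Cohen's kappa between two sets of gradings.
        grades_a, grades_b : equal-length sequences of categorical grade labels."""
        a, b = np.asarray(grades_a), np.asarray(grades_b)
        categories = np.union1d(a, b)
        p_observed = np.mean(a == b)
        # chance agreement from the marginal frequencies of each grading
        p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
        kappa = (p_observed - p_chance) / (1.0 - p_chance)
        return p_observed, kappa

    # hypothetical example: grades from 45-degree vs 200-degree images
    print(exact_agreement_and_kappa(["drusen", "GA", "CNV", "none"],
                                    ["drusen", "GA", "CNV", "drusen"]))
    ```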

  13. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  14. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced technologies in scanning and 3-D imaging in current ophthalmology practice, world-wide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.

  15. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
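
    As a rough illustration of why the tilt angle, focal length and camera height estimated by the intra-camera component matter for converting pixels to metres, the sketch below projects an image row to a ground distance under a simple pinhole, flat-ground model. It is not the authors' estimation method, and all parameter names are assumptions.

    ```python
    import math

    def ground_distance(row, principal_row, focal_px, tilt_deg, cam_height_m):
        """Distance (m) along the ground from the camera foot to the point imaged
        at a given image row, for a pinhole camera looking down at tilt_deg.

        row, principal_row : vertical pixel coordinates (row index grows downward).
        focal_px           : focal length expressed in pixels.
        tilt_deg           : downward tilt of the optical axis from horizontal.
        cam_height_m       : camera height above the (assumed flat) ground plane."""
        # angle of the viewing ray below the horizontal for this image row
        ray_angle = math.radians(tilt_deg) + math.atan2(row - principal_row, focal_px)
        if ray_angle <= 0:
            raise ValueError("ray does not intersect the ground plane")
        return cam_height_m / math.tan(ray_angle)
    ```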

  16. Improvement of passive THz camera images

    Science.gov (United States)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that have the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered and, most importantly, an unexploited region of the electromagnetic spectrum. The reason for this was the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials. However, automated processing of THz images can be challenging. The THz frequency band is especially suited for clothes penetration because this radiation has no harmful ionizing effects and is thus safe for human beings. Strong technology development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a challenging and urgent topic. Digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In the article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs hidden under some popular types of clothing.

  17. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  18. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    Science.gov (United States)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for users for visualizations, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for accessing and reviewing changes to the local government tax base, property valuation assessment, and buying and selling of residential/commercial property for better decisions in a more timely manner. Oblique imagery is also used for infrastructure monitoring, ensuring safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 MPixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described, a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations, the data processing workflow, system calibration and quality control workflows are highlighted, and the achievable accuracy is presented in some detail. This study revealed that an expected accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved. Remaining systematic
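
    As a rough check on what those GSD-based accuracy figures mean in metres, the ground sample distance of the nadir camera follows from the pixel pitch, focal length and flying height under the usual pinhole relation. The short Python sketch below is illustrative; it simply evaluates that relation for one of the flight configurations listed above.

    ```python
    def ground_sample_distance(pixel_size_um, focal_length_mm, flying_height_m):
        """Nadir GSD in metres per pixel for a frame camera (pinhole relation)."""
        return (pixel_size_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)

    # Example: 6.4 um pixels, 50 mm nadir lens flown at 1500 m
    # -> GSD = 6.4e-6 * 1500 / 0.05 = 0.192 m per pixel
    print(ground_sample_distance(6.4, 50, 1500))
    ```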

  19. ACCURACY POTENTIAL AND APPLICATIONS OF MIDAS AERIAL OBLIQUE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    M. Madani

    2012-07-01

    Full Text Available Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for users for visualizations, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for accessing and reviewing changes to the local government tax base, property valuation assessment, and buying and selling of residential/commercial property for better decisions in a more timely manner. Oblique imagery is also used for infrastructure monitoring, ensuring safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 MPixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described, a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations, the data processing workflow, system calibration and quality control workflows are highlighted, and the achievable accuracy is presented in some detail. This study revealed that an expected accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved. Remaining

  20. Digital subtraction angiography

    International Nuclear Information System (INIS)

    Neuwirth, J. Jr.; Bohutova, J.

    1987-01-01

    The quality of radiodiagnostic methods depends to a great extent on the quality of the resulting image. The basic technical principles of the different parts of a digital subtraction angiography apparatus and of the methods for improving the image are summarized. The instrument is based on a video chain consisting of an X-ray tube, a radiographic image intensifier, optical parts, a video camera, an analog-to-digital converter and a computer. The main advantage of the digitally processed image is the possibility of optimizing the image into a form which contains the largest amount of diagnostically valuable information. The mathematical operations for improving the digital image are described: spatial filtration, pixel shift, time filtration, image integration, time interval differentiation and matched filtering. (M.D.). 8 refs., 3 figs
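
    For illustration, the core subtraction step of digital subtraction angiography can be sketched as follows: a pre-contrast mask frame is subtracted from the contrast frames in log space (so that subtraction removes the multiplicative attenuation of static anatomy), followed by temporal integration to suppress noise. This is a generic Python sketch, not the apparatus described above; frame names and the log-domain choice are assumptions.

    ```python
    import numpy as np

    def dsa_subtraction(mask_frame, contrast_frames):
        """Digital subtraction angiography: log-subtract a mask frame and
        average (integrate) several contrast frames to reduce noise.

        mask_frame      : 2-D array acquired before contrast injection.
        contrast_frames : iterable of 2-D arrays acquired during contrast passage."""
        eps = 1.0                               # avoid log(0) for dark pixels
        log_mask = np.log(mask_frame.astype(np.float64) + eps)
        subtracted = [np.log(f.astype(np.float64) + eps) - log_mask
                      for f in contrast_frames]
        return np.mean(subtracted, axis=0)      # temporal integration of subtracted frames
    ```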

  1. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation for changing from CCD sensor technology to CMOS in the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time a large 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  2. Differential gene expression in the murine gastric fundus lacking interstitial cells of Cajal

    Directory of Open Access Journals (Sweden)

    Ward Sean M

    2003-06-01

    Full Text Available Abstract Background The muscle layers of the murine gastric fundus have no interstitial cells of Cajal at the level of the myenteric plexus and possess only intramuscular interstitial cells; this tissue does not generate electric slow waves. The absence of intramuscular interstitial cells in W/WV mutants provides a unique opportunity to study the molecular changes that are associated with the loss of these intercalating cells. Method The gene expression profile of the gastric fundus of wild type and W/WV mice was assayed by murine microarray analysis displaying a total of 8734 elements. Queried genes from the microarray analysis were confirmed by semi-quantitative reverse transcription-polymerase chain reaction. Results Twenty-one genes were differentially expressed between wild type and W/WV mice. Eleven transcripts had 2.0–2.5 fold higher mRNA expression in the W/WV gastric fundus when compared to wild type tissues. Ten transcripts had 2.1–3.9 fold lower expression in W/WV mutants in comparison with wild type animals. None of these genes had previously been implicated in any bowel motility function. Conclusions These data provide evidence that the expression of several important genes is significantly changed in the murine fundus of W/WV mutants, which lack intramuscular interstitial cells of Cajal and have reduced enteric motor neurotransmission.

  3. Hypertensive retinopathy identification through retinal fundus image using backpropagation neural network

    Science.gov (United States)

    Syahputra, M. F.; Amalia, C.; Rahmat, R. F.; Abdullah, D.; Napitupulu, D.; Setiawan, M. I.; Albra, W.; Nurdin; Andayani, U.

    2018-03-01

    Hypertension, or high blood pressure, can cause damage to the blood vessels in the retina of the eye, called hypertensive retinopathy (HR). Hypertension causes swelling of the blood vessels and a decrease in retinal performance. HR is usually detected through physical examination with an ophthalmoscope, which is still conducted manually by an ophthalmologist. In such a manual manner, it takes a long time for a doctor to detect HR in a patient based on the retinal fundus image. To overcome this problem, a method is needed to identify the retinal fundus image automatically. In this research, a backpropagation neural network was used as the method for retinal fundus identification. The steps performed prior to identification were pre-processing (green channel, contrast limited adaptive histogram equalization (CLAHE), morphological closing, background exclusion, thresholding and connected component analysis) and feature extraction using zoning. The results show that the proposed method is able to identify the retinal fundus with an accuracy of 95% with a maximum of 1500 epochs.
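
    A minimal Python/OpenCV sketch of the pre-processing chain listed above (green channel, CLAHE, morphological closing, background exclusion, thresholding, connected component analysis) is given below for illustration; the kernel size, clip limit and area threshold are assumptions, not the values used in the study.

    ```python
    import cv2
    import numpy as np

    def preprocess_fundus(bgr_image):
        """Rough vessel-emphasis pre-processing for a colour fundus image."""
        green = bgr_image[:, :, 1]                                   # green channel has the best vessel contrast
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # contrast limited adaptive hist. equalization
        enhanced = clahe.apply(green)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        background = cv2.morphologyEx(enhanced, cv2.MORPH_CLOSE, kernel)  # closing fills in the dark vessels
        vessels = cv2.subtract(background, enhanced)                 # background exclusion leaves bright vessels
        _, binary = cv2.threshold(vessels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # connected component analysis to drop small noise blobs
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        clean = np.zeros_like(binary)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] > 50:
                clean[labels == i] = 255
        return clean
    ```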

  4. Correlation between Spectral Optical Coherence Tomography and Fundus Autofluorescence at the margins of Geographic Atrophy

    Science.gov (United States)

    Brar, Manpreet; Kozak, Igor; Cheng, Lingyun; Bartsch, Dirk-Uwe G.; Yuson, Ritchie; Nigam, Nitin; Oster, Stephen F.; Mojana, Francesca; Freeman, William R.

    2009-01-01

    Purpose: We studied the appearance of the margins of geographic atrophy in high-resolution optical coherence tomography (OCT) images and correlated those changes with fundus autofluorescence imaging. Design: Retrospective observational case study. Methods: Patients with geographic atrophy secondary to dry age-related macular degeneration (ARMD) were assessed by means of spectral domain OCT (Spectralis HRA/OCT; Heidelberg Engineering, Heidelberg, Germany, or OTI, Inc, Toronto, Canada) as well as autofluorescence imaging (HRA or Spectralis, Heidelberg Engineering, Heidelberg, Germany). The outer retinal layer alterations were analyzed in the junctional zone between normal retina and atrophic retina and correlated with the corresponding fundus autofluorescence. Results: 23 eyes of 16 patients aged between 62 and 96 years were examined. There was a significant association between the OCT findings and the fundus autofluorescence findings (r=0.67); smooth margins on OCT corresponded significantly to normal fundus autofluorescence (kappa=0.7348). Outer retinal alterations at the margins and increased autofluorescence secondary to increased lipofuscin may together serve as determinants of progression of geographic atrophy. PMID:19541290

  5. Fundus autofluorescence and optical coherence tomographic findings in acute zonal occult outer retinopathy.

    Science.gov (United States)

    Fujiwara, Takamitsu; Imamura, Yutaka; Giovinazzo, Vincent J; Spaide, Richard F

    2010-09-01

    The purpose of this study was to investigate the fundus autofluorescence and optical coherence tomography findings in eyes with acute zonal occult outer retinopathy (AZOOR). A retrospective observational case series of the fundus autofluorescence and spectral domain optical coherence tomography in a series of patients with AZOOR. There were 19 eyes of 11 patients (10 women), who had a mean age of 49.1 +/- 13.9 years. Fundus autofluorescence abnormalities were seen in 17 of the 19 eyes, were more common in the peripapillary area, and were smaller in extent than the optical coherence tomography abnormalities. Nine eyes showed progression of hypoautofluorescence area during the mean follow-up of 69.7 months. The mean thickness of the photoreceptor layer at fovea was 177 microm in eyes with AZOOR, which was significantly thinner than controls (193 microm, P = 0.049). Abnormal retinal laminations were found in 12 eyes and were located over areas of loss of the photoreceptors. The subfoveal choroidal thickness was 243 microm, which is normal. Fundus autofluorescence abnormalities in AZOOR showed distinct patterns of retinal pigment epithelial involvement, which may be progressive. Thinning of photoreceptor cell layer with loss of the outer segments and abnormal inner retinal lamination in the context of a normal choroid are commonly found in AZOOR.

  6. [Fundus autofluorescence in patients with inherited retinal diseases : Patterns of fluorescence at two different wavelengths.

    NARCIS (Netherlands)

    Theelen, T.; Boon, C.J.F.; Klevering, B.J.; Hoyng, C.B.

    2008-01-01

    BACKGROUND: Fundus autofluorescence (FAF) may be excited and measured at different wavelengths. In the present study we compared short wavelength and near-infrared FAF patterns of retinal dystrophies. METHODS: We analysed both eyes of 108 patients with diverse retinal dystrophies. Besides colour

  7. Fundus autofluorescence and optical coherence tomography of congenital grouped albinotic spots.

    Science.gov (United States)

    Kim, David Y; Hwang, John C; Moore, Anthony T; Bird, Alan C; Tsang, Stephen H

    2010-09-01

    The purpose of this study was to describe the findings of fundus autofluorescence (FAF) and optical coherence tomography in a series of patients with congenital grouped albinotic spots. Three eyes of three patients with congenital grouped albinotic spots were evaluated with FAF and optical coherence tomography imaging to evaluate the nature of the albinotic spots. In all three eyes with congenital grouped albinotic spots, FAF imaging showed autofluorescent spots corresponding to the albinotic spots seen on stereo biomicroscopy. One eye also had additional spots detected on FAF imaging that were not visible on stereo biomicroscopy or color fundus photographs. Fundus autofluorescence imaging of the spots showed decreased general autofluorescence and decreased peripheral autofluorescence surrounding central areas of retained or increased autofluorescence. Optical coherence tomography showed a disruption in signal from the hyperreflective layer corresponding to the inner and outer segment junction and increased signal backscattering from the choroid in the area of the spots. Fluorescein angiography showed early and stable hyperfluorescence of the spots without leakage. In this case series, FAF showed decreased autofluorescence of the spots consistent with focal retinal pigment epithelium atrophy or abnormal material blocking normal autofluorescence and areas of increased autofluorescence suggesting retinal pigment epithelium dysfunction. The findings of optical coherence tomography and fluorescein angiography suggest photoreceptor and retinal pigment epithelium layer abnormalities. Fundus autofluorescence and optical coherence tomography are useful noninvasive diagnostic adjuncts that can aid in the diagnosis of congenital grouped albinotic spots, help determine extent of disease, and contribute to our understanding of its pathophysiology.

  8. Automatic Drusen Quantification and Risk Assessment of Age-related Macular Degeneration on Color Fundus Images

    NARCIS (Netherlands)

    Grinsven, M.J.J.P. van; Lechanteur, Y.T.E.; Ven, J.P.H. van de; Ginneken, B. van; Hoyng, C.B.; Theelen, T.; Sanchez, C.I.

    2013-01-01

    PURPOSE: To evaluate a machine learning algorithm that allows for computer aided diagnosis (CAD) of non-advanced age-related macular degeneration (AMD) by providing an accurate detection and quantification of drusen location, area and size. METHODS: Color fundus photographs of 407 eyes without AMD

  9. Ocular Fundus Photography as a Tool to Study Stroke and Dementia.

    Science.gov (United States)

    Cheung, Carol Y; Chen, Christopher; Wong, Tien Y

    2015-10-01

    Although cerebral small vessel disease has been linked to stroke and dementia, due to limitations of current neuroimaging technology, direct in vivo visualization of changes in the cerebral small vessels (e.g., cerebral arteriolar narrowing, tortuous microvessels, blood-brain barrier damage, capillary microaneurysms) is difficult to achieve. As the retina and the brain share similar embryological origin, anatomical features, and physiologic properties with the cerebral small vessels, the retinal vessels offer a unique and easily accessible "window" to study the correlates and consequences of cerebral small vessel diseases in vivo. The retinal microvasculature can be visualized, quantified and monitored noninvasively using ocular fundus photography. Recent clinic- and population-based studies have demonstrated a close link between retinal vascular changes seen on fundus photography and stroke and dementia, suggesting that ocular fundus photography may provide insights into the contribution of microvascular disease to stroke and dementia. In this review, we summarize current knowledge on retinal vascular changes, such as retinopathy and changes in retinal vascular measures with stroke and dementia as well as subclinical markers of cerebral small vessel disease, and discuss the possible clinical implications of these findings in neurology. Studying pathologic changes of retinal blood vessels may be useful for understanding the etiology of various cerebrovascular conditions; hence, ocular fundus photography can be potentially translated into clinical practice.

  10. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  11. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  12. Realistic camera noise modeling with application to improved HDR synthesis

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Aelterman, Jan; Pižurica, Aleksandra; Philips, Wilfried

    2012-12-01

    Due to the ongoing miniaturization of digital camera sensors and the steady increase of the "number of megapixels", individual sensor elements of the camera become more sensitive to noise, deteriorating the final image quality. To work around this problem, sophisticated processing algorithms in the devices can help to maximally exploit knowledge of the sensor characteristics (e.g., in terms of noise) and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise, only limited attention has gone to more realistic digital camera noise models. In this article, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping and post-processing. We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR image. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE sense (or SNR sense) for the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be computed directly from the noise model parameters, which gives rise to different symmetric and asymmetric shapes when electronic noise or photon noise is dominant. We also explain how to deal with the case that some of the noise model parameters are unknown and explain how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
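
    To make the weighted-average reconstruction concrete, the Python sketch below merges registered LDR exposures into an irradiance estimate using a generic hat-shaped weight over pixel intensity. It is only an illustration of the general scheme described above; it does not implement the Bayesian, noise-model-derived weighting proposed in the record, and it assumes a linear camera response.

    ```python
    import numpy as np

    def merge_hdr(ldr_images, exposure_times):
        """Weighted-average HDR merge of registered LDR exposures of a static scene.

        ldr_images     : list of 2-D arrays with values in [0, 255] (linear response assumed).
        exposure_times : list of exposure times in seconds, in the same order."""
        num = np.zeros(ldr_images[0].shape, dtype=np.float64)
        den = np.zeros_like(num)
        for img, t in zip(ldr_images, exposure_times):
            z = img.astype(np.float64)
            w = 1.0 - np.abs(z - 127.5) / 127.5        # hat weight: distrust under/over-exposed pixels
            num += w * z / t                           # irradiance estimate from this exposure
            den += w
        return num / np.maximum(den, 1e-9)             # weighted average across exposures
    ```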

  13. Target-Tracking Camera for a Metrology System

    Science.gov (United States)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrological system is used to determine the varying relative positions of radiating elements of an airborne synthetic aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
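
    For a one-dimensional PSD of active length L with photocurrents I1 and I2 measured at its two end contacts, the spot position relative to the centre follows from the current imbalance. The Python sketch below encodes that standard relation for illustration; it is not the flight hardware's analog implementation, and the parameter names are assumptions.

    ```python
    def psd_spot_position(i1, i2, active_length_mm):
        """Spot position (mm from centre) on a 1-D position-sensitive detector.

        i1, i2           : photocurrents measured at the two end contacts.
        active_length_mm : active length L of the PSD.
        Standard relation: x = (L / 2) * (I2 - I1) / (I1 + I2)."""
        return 0.5 * active_length_mm * (i2 - i1) / (i1 + i2)

    # hypothetical example: equal currents -> spot at the centre (0 mm)
    print(psd_spot_position(1.0e-6, 1.0e-6, 10.0))
    ```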

  14. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  15. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle

  16. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different from the area of the crystals on the tube in other rows for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other

  17. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features...

  18. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  19. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  20. Sensitivity and specificity of automated analysis of single-field non-mydriatic fundus photographs by Bosch DR Algorithm-Comparison with mydriatic fundus photography (ETDRS) for screening in undiagnosed diabetic retinopathy.

    Directory of Open Access Journals (Sweden)

    Pritam Bawankar

    Full Text Available Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate, requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity of the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm, and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image: these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95%, respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images, and can greatly simplify screening for DR. This also has major implications for telemedicine in the use of screening for retinopathy in patients with diabetes mellitus.
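
    The sensitivity, specificity, and predictive values quoted above follow directly from a 2x2 confusion matrix against the ETDRS reference reading. The helper below shows that calculation; the counts in the usage line are placeholders for illustration, not the study's actual cell counts.

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only.
print(screening_metrics(tp=180, fp=12, tn=350, fn=18))
```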

  1. Subretinal Fibrosis in Stargardt’s Disease with Fundus Flavimaculatus and ABCA4 Gene Mutation

    Directory of Open Access Journals (Sweden)

    Settimio Rossi

    2012-12-01

    Full Text Available Purpose: To report on 4 patients affected by Stargardt’s disease (STGD) with fundus flavimaculatus (FFM) and ABCA4 gene mutation associated with subretinal fibrosis. Methods: Four patients with a diagnosis of STGD were clinically examined. All 4 cases underwent a full ophthalmologic evaluation, including best-corrected visual acuity measured by the Snellen visual chart, biomicroscopic examination, fundus examination, fundus photography, electroretinogram, microperimetry, optical coherence tomography and fundus autofluorescence. All patients were subsequently screened for ABCA4 gene mutations, identified by microarray genotyping and confirmed by conventional DNA sequencing of the relevant exons. Results: In all 4 patients, ophthalmologic exam showed areas of subretinal fibrosis in different retinal sectors. In only 1 case, these lesions were correlated to an ocular trauma as confirmed by biomicroscopic examination of the anterior segment that showed a nuclear cataract dislocated to the superior site and vitreous opacities along the lens capsule. The other patients reported a lifestyle characterized by competitive sport activities. The performed instrumental diagnostic investigations confirmed the diagnosis of STGD with FFM in all patients. Moreover, in all 4 affected individuals, mutations in the ABCA4 gene were found. Conclusions: Patients with the diagnosis of STGD associated with FFM can show atypical fundus findings. We report on 4 patients affected by STGD with ABCA4 gene mutation associated with subretinal fibrosis. Our findings suggest that this phenomenon can be accelerated by ocular trauma and also by ocular microtrauma caused by sport activities, highlighting that lifestyle can play a role in the onset of these lesions.

  2. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.

  3. ACCURACY ASSESSMENT OF GO PRO HERO 3 (BLACK CAMERA IN UNDERWATER ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    P. Helmholz

    2016-06-01

    Full Text Available Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of Underwater Photogrammetry, especially since the change of medium under water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included the handling of the camera in a controlled manner, where the camera was only dunked into the water tank using 7MP and 12MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave for the 7MP test series the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.

  4. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    Science.gov (United States)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of Underwater Photogrammetry, especially since the change of medium under water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included the handling of the camera in a controlled manner, where the camera was only dunked into the water tank using 7MP and 12MP resolution, and a rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave for the 7MP test series the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12MP test series the maximum rms value is 0.653 mm.
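
    The stability and accuracy figures quoted in these two records reduce to simple statistics over repeated calibrations and over check-point residuals. The sketch below shows those computations; the input values in the usage lines are placeholders, not the paper's data.

```python
import numpy as np

def camera_constant_stability(camera_constants_mm):
    """Mean and sample standard deviation of the camera constant c
    across repeated calibrations (compare the quoted sigma_c values)."""
    c = np.asarray(camera_constants_mm, dtype=float)
    return c.mean(), c.std(ddof=1)

def checkpoint_rms(residuals_mm):
    """Root-mean-square of check-point residuals."""
    r = np.asarray(residuals_mm, dtype=float)
    return float(np.sqrt(np.mean(r ** 2)))

# Placeholder values for illustration only.
print(camera_constant_stability([2.718, 2.721, 2.722, 2.719]))
print(checkpoint_rms([0.2, -0.5, 0.4, -0.3, 0.6]))
```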

  5. The MARS Photon Processing Cameras for Spectral CT

    CERN Document Server

    Doesburg, Robert Michael Nicholas; Butler, APH; Renaud, PF

    This thesis is about the development of the MARS camera: a standalone portable digital x-ray camera with spectral sensitivity. It is built for use in the MARS Spectral system from the Medipix2 and Medipix3 imaging chips. Photon counting detectors and Spectral CT are introduced, and Medipix is identified as a powerful new imaging device. The goals and strategy for the MARS camera are discussed. The Medipix chip physical, electronic and functional aspects, and experience gained, are described. The camera hardware, firmware and supporting PC software are presented. Reports of experimental work on the process of equalisation from noise, and of tests of charge summing mode, conclude the main body of the thesis. The camera has been actively used since late 2009 in pre-clinical research. A list of publications that derive from the use of the camera and the MARS Spectral scanner demonstrates the practical benefits already obtained from this work. Two of the publications are first-author, eight are co-authored...

  6. Low-cost uncooled VOx infrared camera development

    Science.gov (United States)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher resolution 17 µm pixel pitch 640×480 Tamarisk®640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low SWaP camera that costs less than US $500 based on a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC and low power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer scale optics and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants of the DARPA LCTI-M program.

  7. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  8. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  9. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  10. DrishtiCare: a telescreening platform for diabetic retinopathy powered with fundus image analysis.

    Science.gov (United States)

    Joshi, Gopal Datt; Sivaswamy, Jayanthi

    2011-01-01

    Diabetic retinopathy is the leading cause of blindness in urban populations. Early diagnosis through regular screening and timely treatment has been shown to prevent visual loss and blindness. It is very difficult to cater to this vast set of diabetes patients, primarily because of high costs in reaching out to patients and a scarcity of skilled personnel. Telescreening offers a cost-effective solution to reach out to patients but is still inadequate due to an insufficient number of experts who serve the diabetes population. Developments toward fundus image analysis have shown promise in addressing the scarcity of skilled personnel for large-scale screening. This article aims at addressing the underlying issues in traditional telescreening to develop a solution that leverages the developments carried out in fundus image analysis. We propose a novel Web-based telescreening solution (called DrishtiCare) integrating various value-added fundus image analysis components. A Web-based platform on the software as a service (SaaS) delivery model is chosen to make the service cost-effective, easy to use, and scalable. A server-based prescreening system is employed to scrutinize the fundus images of patients and to refer them to the experts. An automatic quality assessment module ensures transfer of fundus images that meet grading standards. An easy-to-use interface, enabled with new visualization features, is designed for case examination by experts. Three local primary eye hospitals have participated and used DrishtiCare's telescreening service. A preliminary evaluation of the proposed platform is performed on a set of 119 patients, of which 23% are identified with the sight-threatening retinopathy. Currently, evaluation at a larger scale is under process, and a total of 450 patients have been enrolled. The proposed approach provides an innovative way of integrating automated fundus image analysis in the telescreening framework to address well-known challenges in large

  11. Digital photography: communication, identity, memory

    NARCIS (Netherlands)

    van Dijck, J.

    2008-01-01

    Taking photographs seems no longer primarily an act of memory intended to safeguard a family's pictorial heritage, but is increasingly becoming a tool for an individual's identity formation and communication. Digital cameras, cameraphones, photoblogs and other multipurpose devices are used to

  12. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    Science.gov (United States)

    Hobbs, Michael T.; Brehme, Cheryl S.

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  13. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    Science.gov (United States)

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  14. NUKAB system use with the PICKER DYNA CAMERA II

    International Nuclear Information System (INIS)

    Collet, H.; Faurous, P.; Lehn, A.; Suquet, P.

    Present-day data processing units connected to scintillation gamma cameras can make use of hard-wired-program or stored-program systems. The NUKAB system uses the latter technique. The central element of the data processing unit, connected to the PICKER DYNA CAMERA II output, consists of a DIGITAL PDP 8E computer with a 12-bit word length. The 12-bit format restricts the possibilities of digitisation, with 64x64 images representing the practical limit. However, the NUKAB system appears well suited to the processing of data from the gamma cameras at present in service. The addition of output terminals of the tracing-panel type should widen the possibilities of the system. It seems that the 64x64 format is not a handicap in view of the resolving power of the detectors [fr

  15. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  16. Demonstration of the CDMA-mode CAOS smart camera.

    Science.gov (United States)

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled, factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by the CMOS sensor are used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright light test target scenes, successfully demonstrated is a proof-of-concept visible band CAOS smart camera operating in the CDMA-mode using Walsh-design CAOS pixel codes of up to 4096 bits in length with a maximum 10 kHz code bit rate, giving a 0.4096 second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright light spectrally diverse targets.
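
    In CDMA-mode CAOS imaging, each agile pixel is time-modulated with its own orthogonal (Walsh-type) code, a single point detector records the summed signal, and per-pixel irradiances are recovered by correlation. The toy sketch below illustrates that encode/decode principle with a small Hadamard code set; it is an assumption-laden illustration (8 pixels, bipolar codes, no noise), not the instrument's actual DSP chain.

```python
import numpy as np

def walsh_codes(n):
    """Rows of an order-n Hadamard matrix (n a power of two) used as
    mutually orthogonal pixel codes."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def cdma_decode(point_detector_signal, codes):
    """Recover per-pixel irradiances by correlating the detector time
    series against each pixel's code."""
    return codes @ point_detector_signal / codes.shape[1]

codes = walsh_codes(8)                                  # 8 pixels, 8-bit codes
irradiance = np.array([3.0, 0.5, 1.2, 0.0, 2.2, 0.1, 0.7, 1.5])
signal = codes.T @ irradiance                           # summed point-detector time series
print(cdma_decode(signal, codes))                       # recovers the irradiance vector
```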

  17. The significance of fluorescein angiography in the early diagnosis of lesions on ocular fundus at pseudoxanthoma elasticum patients.

    Science.gov (United States)

    Bogdanowski, T; Gluza, J; Rasiewicz, D

    1977-05-27

    The role of fluorescein angiography in the early diagnosis of lesions of the ocular fundus in pseudoxanthoma elasticum patients is demonstrated. The authors present the angiographic changes on the basis of three cases of pseudoxanthoma elasticum.

  18. EARLY SIMULTANEOUS FUNDUS AUTOFLUORESCENCE AND OPTICAL COHERENCE TOMOGRAPHY FEATURES AFTER PARS PLANA VITRECTOMY FOR PRIMARY RHEGMATOGENOUS RETINAL DETACHMENT

    NARCIS (Netherlands)

    Dellʼomo, Roberto; Mura, Marco; Lesnik Oberstein, Sarit Y.; Bijl, Heico; Tan, H. Stevie

    2012-01-01

    Purpose: To describe fundus autofluorescence and optical coherence tomography (OCT) features of the macula after pars plana vitrectomy for rhegmatogenous retinal detachment. Methods: Thirty-three eyes of 33 consecutive patients with repaired rhegmatogenous retinal detachment with or without the

  19. Enhanced depth imaging optical coherence tomography and fundus autofluorescence findings in bilateral choroidal osteoma: a case report

    Energy Technology Data Exchange (ETDEWEB)

    Erol, Muhammet Kazim; Coban, Deniz Turgut; Ceran, Basak Bostanci; Bulut, Mehmet, E-mail: muhammetkazimerol@gmail.com [Kazim Erol. Antalya Training and Research Hospital, Ophthalmology Department, Antalya (Turkey)

    2013-11-01

    The authors present enhanced depth imaging optical coherence tomography (EDI OCT) and fundus autofluorescence (FAF) characteristics of a patient with bilateral choroidal osteoma and try to make a correlation between two imaging techniques. Two eyes of a patient with choroidal osteoma underwent complete ophthalmic examination. Enhanced depth imaging optical coherence tomography revealed a cage-like pattern, which corresponded to the calcified region of the tumor. Fundus autofluorescence imaging of the same area showed slight hyperautofluorescence. Three different reflectivity patterns in the decalcified area were defined. In the areas of subretinal fluid, outer segment elongations similar to central serous chorioretinopathy were observed. Hyperautofluorescent spots were evident in fundus autofluorescence in the same area. Calcified and decalcified portions of choroidal osteoma as well as the atrophy of choriocapillaris demonstrated different patterns with enhanced depth imaging and fundus autofluorescence imaging. Both techniques were found to be beneficial in the diagnosis and follow-up of choroidal osteoma. (author)

  20. Enhanced depth imaging optical coherence tomography and fundus autofluorescence findings in bilateral choroidal osteoma: a case report

    International Nuclear Information System (INIS)

    Erol, Muhammet Kazim; Coban, Deniz Turgut; Ceran, Basak Bostanci; Bulut, Mehmet

    2013-01-01

    The authors present enhanced depth imaging optical coherence tomography (EDI OCT) and fundus autofluorescence (FAF) characteristics of a patient with bilateral choroidal osteoma and try to make a correlation between two imaging techniques. Two eyes of a patient with choroidal osteoma underwent complete ophthalmic examination. Enhanced depth imaging optical coherence tomography revealed a cage-like pattern, which corresponded to the calcified region of the tumor. Fundus autofluorescence imaging of the same area showed slight hyperautofluorescence. Three different reflectivity patterns in the decalcified area were defined. In the areas of subretinal fluid, outer segment elongations similar to central serous chorioretinopathy were observed. Hyperautofluorescent spots were evident in fundus autofluorescence in the same area. Calcified and decalcified portions of choroidal osteoma as well as the atrophy of choriocapillaris demonstrated different patterns with enhanced depth imaging and fundus autofluorescence imaging. Both techniques were found to be beneficial in the diagnosis and follow-up of choroidal osteoma. (author)

  1. Enhanced depth imaging optical coherence tomography and fundus autofluorescence findings in bilateral choroidal osteoma: a case report

    Directory of Open Access Journals (Sweden)

    Muhammet Kazim Erol

    2013-06-01

    Full Text Available The authors present enhanced depth imaging optical coherence tomography (EDI OCT) and fundus autofluorescence (FAF) characteristics of a patient with bilateral choroidal osteoma and try to make a correlation between two imaging techniques. Two eyes of a patient with choroidal osteoma underwent complete ophthalmic examination. Enhanced depth imaging optical coherence tomography revealed a cage-like pattern, which corresponded to the calcified region of the tumor. Fundus autofluorescence imaging of the same area showed slight hyperautofluorescence. Three different reflectivity patterns in the decalcified area were defined. In the areas of subretinal fluid, outer segment elongations similar to central serous chorioretinopathy were observed. Hyperautofluorescent spots were evident in fundus autofluorescence in the same area. Calcified and decalcified portions of choroidal osteoma as well as the atrophy of choriocapillaris demonstrated different patterns with enhanced depth imaging and fundus autofluorescence imaging. Both techniques were found to be beneficial in the diagnosis and follow-up of choroidal osteoma.

  2. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time at a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20 times zoom-up. Noise caused by radiation of less than 15 Gy/h did not affect the images. (author)

  3. Solar-Powered Airplane with Cameras and WLAN

    Science.gov (United States)

    Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.

    2004-01-01

    An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.

  4. The development of high-speed 100 fps CCD camera

    International Nuclear Information System (INIS)

    Hoffberg, M.; Laird, R.; Lenkzsus, F.; Liu, C.; Rodricks, B.

    1997-01-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512 x 512 pixel CCD as its sensor, which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized into 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to the PC disk storage. The uncooled CCD can be used either with lenses for visible light imaging or with a phosphor screen for X-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic face plate for high-resolution, high-speed X-ray imaging. The camera is controlled through a custom event-driven user-friendly Windows package. The pixel clock speed can be changed from 1 to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper will describe the electronics, software, and characterizations that have been performed using both visible and X-ray photons. (orig.)
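
    As a quick sanity check of the quoted figures, the arithmetic below assumes each 12-bit sample is stored in a 16-bit (2-byte) word; that packing is an assumption, not stated in the record.

```python
outputs = 2
pixel_rate_per_output = 15e6            # pixels per second on each output
bytes_per_sample = 2                    # assumed: 12-bit sample padded to 16 bits

throughput_mb_s = outputs * pixel_rate_per_output * bytes_per_sample / 1e6
print(throughput_mb_s, "MB/s")          # 60.0 MB/s, matching the quoted throughput

frame_time_s = (512 * 512) / (outputs * pixel_rate_per_output)
print(round(1 / frame_time_s), "frames/s")   # ~114 fps, consistent with a 100 fps class camera
```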

  5. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high purity germanium. The central arrangement of the camera operates to effect the carrying out of a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low level information evaluation is provided serving to enhance the signal processing efficiency of the camera

  6. Applications of iQID cameras

    Science.gov (United States)

    Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2017-09-01

    iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have been achieved that greatly boost the performance of iQID, broadening its applications to cover nuclear and particle imaging for preclinical, clinical and homeland security settings. This paper presents an overview of the recent advances of iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.

  7. Trans-palpebral illumination: an approach for wide-angle fundus photography without the need for pupil dilation

    OpenAIRE

    Toslak, Devrim; Thapa, Damber; Chen, Yanjun; Erol, Muhammet Kazim; Paul Chan, R. V.; Yao, Xincheng

    2016-01-01

    It is technically difficult to construct wide-angle fundus imaging devices due to the complexity of conventional transpupillary illumination and imaging mechanisms. We report here a new method, i.e., trans-palpebral illumination, for wide-angle fundus photography without the need for pupil dilation. By constructing a smartphone-based prototype imaging device, we demonstrated a 152° view in a single-shot image. The unique combination of low-cost smartphone design and automatic illumination opt...

  8. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time that can likewise be varied from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 Mbyte), 8 seconds for the image of 4096 by 4096 pixels (48 Mbyte) and 40 seconds for the image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, the eyelike accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 x 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  9. Fundus autofluorescence imaging in dry AMD: 2014 Jules Gonin lecture of the Retina Research Foundation.

    Science.gov (United States)

    Holz, Frank G; Steinberg, Julia S; Göbel, Arno; Fleckenstein, Monika; Schmitz-Valckenberg, Steffen

    2015-01-01

    Fundus autofluorescence (FAF) imaging allows for topographic mapping of intrinsic fluorophores in the retinal pigment epithelial cell monolayer, as well as mapping of other fluorophores that may occur with disease in the outer retina and the sub-neurosensory space. FAF imaging provides information not obtainable with other imaging modalities. Near-infrared fundus autofluorescence images can also be obtained in vivo, and may be largely melanin-derived. FAF imaging has been shown to be useful in a wide spectrum of macular and retinal diseases. The scope of applications now includes identification of diseased RPE in macular/retinal diseases, elucidating pathophysiological mechanisms, identification of early disease stages, refined phenotyping, identification of prognostic markers for disease progression, monitoring disease progression in the context of both natural history and interventional therapeutic studies, and objective assessment of luteal pigment distribution and density as well as RPE melanin distribution. Here, we review the use of FAF imaging in various phenotypic manifestations of dry AMD.

  10. Segmentasi Pembuluh Darah Retina Pada Citra Fundus Menggunakan Gradient Based Adaptive Thresholding Dan Region Growing

    Directory of Open Access Journals (Sweden)

    Deni Sutaji

    2016-07-01

    Full Text Available Abstract: Blood vessel segmentation in retinal fundus images is of substantial importance in medicine, because it can be used to detect diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. A physician needs about two hours to delineate the retinal blood vessels, so a method that makes screening faster is needed. Previous work was able to segment vessels sensitively across variations in vessel width, but still over-segmented pathological areas. This study therefore aims to develop a vessel segmentation method for retinal fundus images that reduces over-segmentation in pathological areas using Gradient Based Adaptive Thresholding and Region Growing. The proposed method consists of three stages: main vessel segmentation, pathology detection, and thin vessel segmentation. Main vessel segmentation applies high-pass filtering and top-hat reconstruction to the contrast-enhanced green channel, so that the difference between vessels and background becomes clearer. Pathological areas are detected using Gradient Based Adaptive Thresholding. Thin vessels are segmented with Region Growing based on the main-vessel labels and the pathology labels. The main and thin vessel segmentations are then merged, and the system output is a binary vessel image. In experiments, the method segmented retinal vessels well on the DRIVE fundus images, with an average accuracy of 95.25% and an Area Under Curve (AUC) of the Relative Operating Characteristic (ROC) of 74.28%. Keywords: retinal fundus image, gradient based adaptive thresholding, pathology, retinal blood vessels, region growing
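
    A very reduced sketch of the three-stage idea (main vessels, pathology mask, thin vessels) is given below using OpenCV. The kernel sizes and thresholds are illustrative guesses, a plain morphological black-hat stands in for top-hat reconstruction, and a locally adaptive threshold stands in for the paper's Region Growing step; none of it reproduces the authors' actual implementation.

```python
import cv2
import numpy as np

def segment_vessels(fundus_bgr):
    """Toy retinal vessel segmentation: main vessels + thin vessels,
    with bright pathological blobs masked out before the second pass."""
    green = fundus_bgr[:, :, 1]
    green = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)   # contrast enhancement

    # Vessels are darker than the background, so a black-hat highlights them.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    vesselness = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)

    # Stage 1: main vessels via a global Otsu threshold.
    _, main = cv2.threshold(vesselness, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Stage 2: crude pathology mask from strong gradients (stand-in for
    # gradient-based adaptive thresholding).
    grad = np.abs(cv2.Laplacian(green, cv2.CV_64F))
    pathology = (grad > np.percentile(grad, 99)).astype(np.uint8) * 255

    # Stage 3: thin vessels via a gentler, locally adaptive threshold,
    # suppressed inside the pathology mask, then merged with the main map.
    thin = cv2.adaptiveThreshold(vesselness, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 25, -5)
    thin[pathology > 0] = 0
    return cv2.bitwise_or(main, thin)
```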

  11. CMOS Image Sensors: Electronic Camera On A Chip

    Science.gov (United States)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  12. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    Science.gov (United States)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malaria retinopathy (MR) in a resource challenged environment. The desktop, portable, and iPhone based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower priced mobile phone-based camera, and slightly better than the higher priced table top camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high quality images to afford the best possible opportunity for reading by a remotely located

  13. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    The non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the world-wide level of industrial development requires considerably higher standards of quality of manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Because they are easily obtained and discriminate very well between the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. For real-time investigations, tube type cameras, CCD cameras and recently CID cameras are involved; these capture the image from an appropriate scintillator through the agency of a mirror. The analog signal of the camera is then converted into a digital signal by the signal processing technology included in the camera. The image acquisition card or frame grabber in a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that move the object table horizontally and vertically and rotate it. Based on this system, a lot of static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects are done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  14. Hubble Space Telescope, Faint Object Camera

    Science.gov (United States)

    1981-01-01

    This drawing illustrates Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data that is free of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.

  15. Vertically Integrated Edgeless Photon Imaging Camera

    Energy Technology Data Exchange (ETDEWEB)

    Fahim, Farah [Fermilab; Deptuch, Grzegorz [Fermilab; Shenai, Alpana [Fermilab; Maj, Piotr [AGH-UST, Cracow; Kmon, Piotr [AGH-UST, Cracow; Grybos, Pawel [AGH-UST, Cracow; Szczygiel, Robert [AGH-UST, Cracow; Siddons, D. Peter [Brookhaven; Rumaiz, Abdul [Brookhaven; Kuczewski, Anthony [Brookhaven; Mead, Joseph [Brookhaven; Bradford, Rebecca [Argonne; Weizeorick, John [Argonne

    2017-01-01

    The Vertically Integrated Photon Imaging Chip - Large (VIPIC-L) is a large area, small pixel (65μm), 3D integrated, photon counting ASIC with zero-suppressed or full frame dead-time-less data readout. It features a data throughput of 14.4 Gbps per chip with a full frame readout speed of 56 kframes/s in the imaging mode. VIPIC-L contains a 192 x 192 pixel array and the total size of the chip is 1.248cm x 1.248cm with only a 5μm periphery. It contains about 120M transistors. A 1.3M pixel camera module will be developed by arranging a 6 x 6 array of 3D VIPIC-L’s bonded to a large area silicon sensor on the analog side and to a readout board on the digital side. The readout board hosts a bank of FPGA’s, one per VIPIC-L, to allow processing of up to 0.7 Tbps of raw data produced by the camera.

  16. Automatic Segmentation of Optic Disc in Eye Fundus Images: A Survey

    OpenAIRE

    Allam, Ali; Youssif, Aliaa; Ghalwash, Atef

    2015-01-01

    Optic disc detection and segmentation is one of the key elements for automatic retinal disease screening systems. The aim of this survey paper is to review, categorize and compare the optic disc detection algorithms and methodologies, giving a description of each of them, highlighting their key points and performance measures. Accordingly, this survey firstly overviews the anatomy of the eye fundus showing its main structural components along with their properties and functions. Consequently,...

  17. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
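
    The dice coefficient quoted for the disc and cup results is a simple overlap measure between a predicted mask and a reference mask; a minimal implementation is sketched below for completeness.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice overlap between a predicted and a reference segmentation mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0                       # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```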

  18. Fundus autofluorescence in the diagnosis and monitoring of acute retinal necrosis

    OpenAIRE

    Ward, Tyson SJ; Reddy, Ashvini K

    2015-01-01

    Background Acute retinal necrosis (ARN), a vision threatening viral retinitis, is often diagnosed and treated based on clinical findings. These clinical features have been well characterized by various imaging modalities, but not using fundus autofluorescence (FAF), a noninvasive method of evaluating the neurosensory retina and retinal pigment epithelium (RPE) based on the detection of endogenous fluorophores. Findings A patient diagnosed with ARN was followed over a 10-month period to identi...

  19. Fundus auto fluorescence and spectral domain ocular coherence tomography in the early detection of chloroquine retinopathy

    OpenAIRE

    Megan B. Goodman; Ari Ziskind

    2015-01-01

    Purpose: To determine the sensitivity of spectral domain ocular coherence tomography (SD-OCT) and fundus auto fluorescence (FAF) images as a screening test to detect early changes in the retina prior to the onset of chloroquine retinopathy. Method: The study was conducted using patients taking chloroquine (CQ), referred by the Rheumatology Department to the Ophthalmology Department at Tygerberg Academic Hospital. Group A consisted of 59 patients on CQ for less than 5 years, and Group B co...

  20. Application of 3-Dimensional Printing Technology to Construct an Eye Model for Fundus Viewing Study

    OpenAIRE

    Xie, Ping; Hu, Zizhong; Zhang, Xiaojun; Li, Xinhua; Gao, Zhishan; Yuan, Dongqing; Liu, Qinghuai

    2014-01-01

    Objective To construct a life-sized eye model using the three-dimensional (3D) printing technology for fundus viewing study of the viewing system. Methods We devised our schematic model eye based on Navarro's eye and redesigned some parameters because of the change of the corneal material and the implantation of intraocular lenses (IOLs). Optical performance of our schematic model eye was compared with Navarro's schematic eye and other two reported physical model eyes using the ZEMAX optical ...