WorldWideScience

Sample records for nonmydriatic fundus camera

  1. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  2. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Science.gov (United States)

    Shen, Bailey Y.

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  3. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.

    Science.gov (United States)

    Shen, Bailey Y; Mukai, Shizuo

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  4. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even in bright ambient light. We built a mobile demonstrator to validate the method and successfully acquired color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass-market components to show that no major additional technical outlay is needed to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry of the optical design that is found in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field 20° in diameter to a rectangular field of 68° by 18°. We acquired a fundus video while the subject was lightly touching and releasing the eyelid. The resulting video showed changes in the vessels in the region of the papilla and a change in the paleness of the papilla.
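    The claimed field extension can be sanity-checked with a flat-angle approximation that treats angular extents as planar areas (a simplification that ignores spherical geometry; the code below is illustrative, not from the paper):

```python
import math

def circular_field_area(diameter_deg):
    """Angular area of a circular field of view, treated as flat."""
    return math.pi * (diameter_deg / 2.0) ** 2

def rect_field_area(width_deg, height_deg):
    """Angular area of a rectangular field of view, treated as flat."""
    return width_deg * height_deg

conventional = circular_field_area(20.0)    # 20°-diameter circular field
stripe_field = rect_field_area(68.0, 18.0)  # 68° x 18° stripe field

print(f"conventional: {conventional:.0f} deg^2")  # ~314 deg^2
print(f"stripe field: {stripe_field:.0f} deg^2")  # 1224 deg^2
print(f"gain factor:  {stripe_field / conventional:.1f}x")
```

    Under this approximation, the stripe field covers roughly four times the angular area of the conventional circular field.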

  5. Diabetic Retinopathy Screening Ratio Is Improved When Using a Digital, Nonmydriatic Fundus Camera Onsite in a Diabetes Outpatient Clinic

    Directory of Open Access Journals (Sweden)

    Pia Roser

    2016-01-01

    Objective. To evaluate the effect of onsite screening with a nonmydriatic, digital fundus camera for diabetic retinopathy (DR) at a diabetes outpatient clinic. Research Design and Methods. This cross-sectional study included 502 patients, 112 with type 1 and 390 with type 2 diabetes. Patients attended screenings for microvascular complications, including diabetic nephropathy (DN), diabetic polyneuropathy (DP), and DR. Single-field retinal imaging with a digital, nonmydriatic fundus camera was used to assess DR. Prevalence and incidence of microvascular complications were analyzed, and the ratio of newly diagnosed to preexisting complications was calculated for all entities in order to differentiate natural progress from missed DRs. Results. For both types of diabetes combined, prevalence of DR was 25.0% (n=126) and incidence 6.4% (n=32) (T1DM versus T2DM: prevalence 35.7% versus 22.1%; incidence 5.4% versus 6.7%). 25.4% of all DRs were newly diagnosed. Furthermore, the ratio of newly diagnosed to preexisting DR was higher than those for DN (p=0.12) and DP (p=0.03), representing at least 13 patients with missed DR. Conclusions. The results indicate that implementing nonmydriatic, digital fundus imaging in a diabetes outpatient clinic can contribute to improved early diagnosis of diabetic retinopathy.
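    The reported figures can be reproduced from the case counts given in the abstract (note that 126/502 is 25.1%, so the quoted 25.0% prevalence appears to be a rounding in the source):

```python
# Case counts as reported in the abstract
n_patients = 502   # patients screened for microvascular complications
n_dr_total = 126   # patients with diabetic retinopathy (prevalent cases)
n_dr_new = 32      # DR cases first detected at the onsite screening

prevalence = n_dr_total / n_patients           # ~25.1% (quoted as 25.0%)
incidence = n_dr_new / n_patients              # 6.4%
newly_diagnosed_share = n_dr_new / n_dr_total  # 25.4% of all DR cases

print(f"prevalence: {prevalence:.1%}, incidence: {incidence:.1%}, "
      f"newly diagnosed: {newly_diagnosed_share:.1%}")
```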

  6. Fundus Autofluorescence Captured With a Nonmydriatic Retinal Camera in Vegetarians Versus Nonvegetarians.

    Science.gov (United States)

    Kommana, Sumana S; Padgaonkar, Pooja; Mendez, Nicole; Wu, Lesley; Szirth, Bernard; Khouri, Albert S

    2015-09-09

    A baseline level of lipofuscin in the retinal pigment epithelium (RPE) is inevitable with age, but increased levels due to increased oxidative stress can result in deleterious vision loss at older ages. As earlier detection of differences in levels can lead to superior preventative management, we studied the relationship between lipofuscin accumulation and dietary lifestyle (vegetarian vs. nonvegetarian) in a young, healthy South Asian population using retinal fundus autofluorescence (FAF) imaging. In this pilot study, we examined 37 healthy subjects (average age 23 ± 1 years), all undergoing similar stress levels as medical students at Rutgers New Jersey Medical School. Lipofuscin concentrations were imaged using a FAF retinal camera (Canon CX-1). Two images (color and FAF) were captured of the left eye and included in the analysis. FAF quantitative scoring was measured in 2 regions of the captured image, the papillo-macular region (P) and the macula (M), by determining the grayscale score of a 35.5 mm² rectangle in the respective regions. Standardized scores (corrected to remove baseline fluorescence) were then obtained. Means, standard deviations, and t tests were used for comparisons. Fundus autofluorescence scores of regions P and M differed significantly, and vegetarians had significantly lower levels of autofluorescence. These findings can have potential implications regarding long-term retinal health and risk for developing certain diseases over decades in subjects at risk for vision-threatening diseases. © 2015 Diabetes Technology Society.

  7. The Nonmydriatic Fundus Camera in Diabetic Retinopathy Screening: A Cost-Effective Study with Evaluation for Future Large-Scale Application

    Science.gov (United States)

    Scarpa, Giuseppe; Urban, Francesca; Tessarin, Michele; Gallo, Giovanni; Midena, Edoardo

    2016-01-01

    Aims. The study aimed to present the experience of a screening programme for early detection of diabetic retinopathy (DR) using a nonmydriatic fundus camera, evaluating feasibility in terms of validity, resource absorption, and the advantages of a potential future application in an Italian local health authority. Methods. Diabetic patients living in the town of Ponzano, Veneto Region (Northern Italy), were invited to enrol in the screening programme. The "no prevention strategy", with the inclusion of the estimation of blindness-related costs, was compared with screening costs in order to evaluate a future extensive and feasible implementation of the procedure, through a budget impact approach. Results. Of the 498 eligible diabetic patients, 80% were enrolled in the screening programme. 115 patients (34%) were referred to an ophthalmologist, and 9 cases required prompt treatment for either proliferative DR or macular edema. Based on the pilot data, it emerged that an extensive use of the investigated screening programme within the Greater Treviso area could prevent 6 cases of blindness every year, resulting in a saving of €271,543.32 (−13.71%). Conclusions. Fundus images obtained with a nonmydriatic fundus camera could be considered an effective, cost-sparing, and feasible screening tool for the early detection of DR, preventing blindness as a result of diabetes. PMID:27885337

  8. Feasibility and quality of nonmydriatic fundus photography in children

    Science.gov (United States)

    Toffoli, Daniela; Bruce, Beau B.; Lamirel, Cédric; Henderson, Amanda D.; Newman, Nancy J.; Biousse, Valérie

    2011-01-01

    Purpose Ocular funduscopic examination is difficult in young children and is rarely attempted by nonophthalmologists. Our objective was to determine the feasibility of reliably obtaining high-quality nonmydriatic fundus photographs in children. Methods Nonmydriatic fundus photographs were obtained in both eyes of children seen in a pediatric ophthalmology clinic. Ease of fundus photography was recorded on a 10-point Likert scale (10 = very easy). Quality was graded from 1 to 5 (1, inadequate for any diagnostic purpose; 2, unable to exclude all emergent findings; 3, only able to exclude emergent findings; 4, not ideal, but still able to exclude subtle findings; and 5, ideal quality). The primary outcome measure was image quality by age. Results A total of 878 photographs of 212 children (median age, 6 years; range, 1-18 years) were included. Photographs of at least one eye were obtained in 190 children (89.6%) and in both eyes in 181 (85.3%). Median rating for ease of photography was 7. Photographs of some clinical value (grade ≥2) were obtained in 33% of children <3 years. High-quality photographs (grade 4 or 5) were obtained in both eyes in 7% of children <3 years, 57% of children ≥3 to <7 years, 85% of children ≥7 to <9 years, and 65% of children ≥9 years. The youngest patient with high-quality photographs in both eyes was 22 months. Conclusions Nonmydriatic fundus photographs of adequate quality can be obtained in children over age 3 and in some children as young as 22 months. PMID:22153402

  9. Semi-automated retinal vessel analysis in nonmydriatic fundus photography.

    Science.gov (United States)

    Schuster, Alexander Karl-Georg; Fischer, Joachim Ernst; Vossmerbaeumer, Urs

    2014-02-01

    Funduscopic assessment of the retinal vessels may be used to assess the health status of the microcirculation and as a component in the evaluation of cardiovascular risk factors. Typically, the evaluation is restricted to morphological appreciation without strict quantification. Our purpose was to develop and validate a software tool for semi-automated quantitative analysis of the retinal vasculature in nonmydriatic fundus photography. MATLAB software was used to develop a semi-automated image recognition and analysis tool for determining the arterial-venous (A/V) ratio in the central vessel equivalent on 45° digital fundus photographs. Validity and reproducibility of the results were ascertained using nonmydriatic photographs of 50 eyes from 25 subjects recorded with a 3D OCT device (Topcon Corp.). Two hundred and thirty-three eyes of 121 healthy subjects were evaluated to define normative values. A software tool was developed using image thresholds for vessel recognition and vessel width calculation in a semi-automated three-step procedure: vessel recognition on the photograph and artery/vein designation, width measurement, and calculation of central retinal vessel equivalents. Mean vessel recognition rate was 78%, vessel class designation rate 75%, and reproducibility between 0.78 and 0.91. Mean A/V ratio was 0.84. Application to a healthy normative cohort showed high congruence with previously published manual methods. Processing time per image was one minute. Quantitative geometrical assessment of the retinal vasculature may be performed in a semi-automated manner using dedicated software tools. By yielding reproducible numerical data within a short time, this may add value to mere morphological estimates in the clinical evaluation of fundus photographs. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
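    The abstract does not specify how the central retinal vessel equivalents are computed. A common choice, assumed here purely for illustration, is the revised Parr-Hubbard (Knudtson) iterative pairing with branching coefficients 0.88 for arterioles and 0.95 for venules; the vessel widths below are hypothetical:

```python
import math

def knudtson_combine(widths, k):
    """Iteratively pair the widest with the narrowest vessel and merge
    them as k * sqrt(w_max^2 + w_min^2) until one equivalent remains."""
    w = sorted(widths)
    while len(w) > 1:
        merged = k * math.sqrt(w[0] ** 2 + w[-1] ** 2)
        w = sorted(w[1:-1] + [merged])
    return w[0]

def av_ratio(arteriole_widths, venule_widths):
    crae = knudtson_combine(arteriole_widths, 0.88)  # arteriolar equivalent
    crve = knudtson_combine(venule_widths, 0.95)     # venular equivalent
    return crae / crve

# hypothetical widths (pixels) of the six largest vessels of each type
arterioles = [95.0, 102.0, 88.0, 110.0, 97.0, 91.0]
venules = [120.0, 131.0, 115.0, 140.0, 125.0, 118.0]
print(f"A/V ratio: {av_ratio(arterioles, venules):.2f}")
```

    Even with identical width lists for both vessel types, the ratio falls below 1 because the arteriolar coefficient is smaller, consistent with the expectation of an A/V ratio below unity (the study reports a mean of 0.84).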

  10. Feasibility of Non-Mydriatic Ocular Fundus Photography in the Emergency Department: Phase I of the FOTO-ED Study

    Science.gov (United States)

    Bruce, Beau B.; Lamirel, Cédric; Biousse, Valérie; Ward, Antoinette; Heilpern, Katherine L.; Newman, Nancy J.; Wright, David W.

    2011-01-01

    Objectives Examination of the ocular fundus is imperative in many acute medical and neurologic conditions, but direct ophthalmoscopy by non-ophthalmologists is underutilized, poorly performed, and difficult without pharmacologic pupillary dilation. The objective was to examine the feasibility of non-mydriatic fundus photography as a clinical alternative to direct ophthalmoscopy by emergency physicians (EPs). Methods Adult patients presenting to the emergency department (ED) with headache, acute focal neurologic deficit, diastolic blood pressure ≥ 120 mmHg, or acute visual change had ocular fundus photographs taken by nurse practitioners using a non-mydriatic fundus camera. Photographs were reviewed by a neuro-ophthalmologist within 24 hours for findings relevant to acute ED patient care. Nurse practitioners and patients rated ease, comfort, and speed of non-mydriatic fundus photography on a 10-point Likert scale (10 best). Timing of visit and photography were recorded by automated electronic systems. Results Three hundred fifty patients were enrolled. There were 1,734 photographs taken during 230 nurse practitioner shifts. Eighty-three percent of the 350 patients had at least one eye with a high quality photograph, while only 3% of patients had no photographs of diagnostic value. Mean ratings were ≥ 8.7 (standard deviation [SD] ≤ 1.9) for all measures. The median photography session lasted 1.9 minutes (interquartile range [IQR] 1.3 to 2.9 minutes), typically accounting for less than 0.5% of the patient’s total ED visit. Conclusions Non-mydriatic fundus photography taken by nurse practitioners is a feasible alternative to direct ophthalmoscopy in the ED. It is performed well by non-physician staff, is well-received by staff and patients, and requires a trivial amount of time to perform. PMID:21906202

  11. Feasibility of nonmydriatic ocular fundus photography in the emergency department: Phase I of the FOTO-ED study.

    Science.gov (United States)

    Bruce, Beau B; Lamirel, Cédric; Biousse, Valérie; Ward, Antoinette; Heilpern, Katherine L; Newman, Nancy J; Wright, David W

    2011-09-01

    Examination of the ocular fundus is imperative in many acute medical and neurologic conditions, but direct ophthalmoscopy by nonophthalmologists is underutilized, poorly performed, and difficult without pharmacologic pupillary dilation. The objective was to examine the feasibility of nonmydriatic fundus photography as a clinical alternative to direct ophthalmoscopy by emergency physicians (EPs). Adult patients presenting to the emergency department (ED) with headache, acute focal neurologic deficit, diastolic blood pressure ≥ 120 mm Hg, or acute visual change had ocular fundus photographs taken by nurse practitioners using a nonmydriatic fundus camera. Photographs were reviewed by a neuro-ophthalmologist within 24 hours for findings relevant to acute ED patient care. Nurse practitioners and patients rated ease, comfort, and speed of nonmydriatic fundus photography on a 10-point Likert scale (10 best). Timing of visit and photography were recorded by automated electronic systems. A total of 350 patients were enrolled. There were 1,734 photographs taken during 230 nurse practitioner shifts. Eighty-three percent of the 350 patients had at least one eye with a high-quality photograph, while only 3% of patients had no photographs of diagnostic value. Mean ratings were ≥ 8.7 (standard deviation [SD] ≤ 1.9) for all measures. The median photography session lasted 1.9 minutes (interquartile range [IQR] = 1.3 to 2.9 minutes), typically accounting for less than 0.5% of the patient's total ED visit. Nonmydriatic fundus photography taken by nurse practitioners is a feasible alternative to direct ophthalmoscopy in the ED. It is performed well by nonphysician staff, is well-received by staff and patients, and requires a trivial amount of time to perform. © 2011 by the Society for Academic Emergency Medicine.

  12. Nonmydriatic Ocular Fundus Photography in the Emergency Department: How It Can Benefit Neurologists.

    Science.gov (United States)

    Bruce, Beau B

    2015-10-01

    Examination of the ocular fundus is a critical aspect of the neurologic examination. For example, in patients with headache the ocular fundus examination is needed to uncover "red flags" suggestive of secondary etiologies. However, the ocular fundus examination is infrequently and poorly performed in clinical practice. Nonmydriatic ocular fundus photography provides an alternative to direct ophthalmoscopy that has been studied as part of the Fundus Photography versus Ophthalmoscopy Trial Outcomes in the Emergency Department (FOTO-ED) Study. Herein, the results of the FOTO-ED study are reviewed with a particular focus on the study's implications for the acute care of patients presenting with headache and focal neurologic deficits. In headache patients, not only were optic disc edema and optic disc pallor observed, as would be expected, but also a large number of abnormalities associated with hypertension. Based upon subjects with focal neurologic deficits, the FOTO-ED study suggests that the ocular fundus examination may assist with the triage of patients presenting with suspected transient ischemic attack. Continued advances in the ease and portability of nonmydriatic fundus photography will hopefully help to restore the ocular fundus examination as a routinely performed component of all neurologic examinations.

  13. Diagnostic accuracy and use of nonmydriatic ocular fundus photography by emergency physicians: phase II of the FOTO-ED study.

    Science.gov (United States)

    Bruce, Beau B; Thulasi, Praneetha; Fraser, Clare L; Keadey, Matthew T; Ward, Antoinette; Heilpern, Katherine L; Wright, David W; Newman, Nancy J; Biousse, Valérie

    2013-07-01

    During the first phase of the Fundus Photography vs Ophthalmoscopy Trial Outcomes in the Emergency Department study, 13% (44/350; 95% confidence interval [CI] 9% to 17%) of patients had an ocular fundus finding, such as papilledema, relevant to their emergency department (ED) management found by nonmydriatic ocular fundus photography reviewed by neuro-ophthalmologists. All of these findings were missed by emergency physicians, who examined only 14% of enrolled patients by direct ophthalmoscopy. In the present study, we evaluate the sensitivity of nonmydriatic ocular fundus photography, an alternative to direct ophthalmoscopy, for relevant findings when photographs are made available for use by emergency physicians during routine clinical care. Three hundred fifty-four patients presenting to our ED with headache, focal neurologic deficit, visual change, or diastolic blood pressure greater than or equal to 120 mm Hg had nonmydriatic fundus photography obtained (Kowa nonmydriatic α-D). Photographs were placed on the electronic medical record for emergency physician review. Identification of relevant findings on photographs by emergency physicians was compared with a reference standard of neuro-ophthalmologist review. Emergency physicians reviewed photographs of 239 patients (68%). Thirty-five patients (10%; 95% CI 7% to 13%) had relevant findings identified by neuro-ophthalmologist review (6 disc edema, 6 grade III/IV hypertensive retinopathy, 7 isolated hemorrhages, 15 optic disc pallor, and 1 retinal vascular occlusion). Emergency physicians identified 16 of 35 relevant findings (sensitivity 46%; 95% CI 29% to 63%) and also identified 289 of 319 normal findings (specificity 91%; 95% CI 87% to 94%). Emergency physicians reported that photographs were helpful for 125 patients (35%). Emergency physicians used nonmydriatic fundus photographs more frequently than they performed direct ophthalmoscopy, and their detection of relevant abnormalities improved. 
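    The reported sensitivity and specificity follow from the counts in the abstract (16 of 35 relevant findings identified; 289 of 319 normal findings correctly identified). The sketch below uses a simple Wald interval; the paper's interval method is not stated, so its bounds may differ slightly:

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate with a Wald 95% interval; the study's exact
    interval method is not stated, so bounds may differ slightly."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

sens, sens_lo, sens_hi = proportion_ci(16, 35)    # relevant findings found
spec, spec_lo, spec_hi = proportion_ci(289, 319)  # normals correctly called

print(f"sensitivity {sens:.0%} (95% CI {sens_lo:.0%} to {sens_hi:.0%})")
print(f"specificity {spec:.0%} (95% CI {spec_lo:.0%} to {spec_hi:.0%})")
```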

  14. Ophthalmoscopy versus non-mydriatic fundus photography in the ...

    African Journals Online (AJOL)

    1990-09-01

    Sep 1, 1990 ... detection of diabetic retinopathy before and after dilatation of the pupils in black diabetics was .... patient's retina, the camera also provides excellent material for student education. We are indebted to Mr R. Taylor for taking ...

  15. Morphometric Optic Nerve Head Analysis in Glaucoma Patients: A Comparison between the Simultaneous Nonmydriatic Stereoscopic Fundus Camera (Kowa Nonmyd WX3D) and the Heidelberg Scanning Laser Ophthalmoscope (HRT III)

    Directory of Open Access Journals (Sweden)

    Siegfried Mariacher

    2016-01-01

    Purpose. To retrospectively investigate the agreement between morphometric optic nerve head parameters assessed with the confocal laser ophthalmoscope HRT III and the stereoscopic fundus camera Kowa nonmyd WX3D. Methods. Morphometric optic nerve head parameters of 40 eyes of 40 patients with primary open angle glaucoma were analyzed with regard to their vertical cup-to-disc ratio (CDR). Vertical CDR, disc area, cup volume, rim volume, and maximum cup depth were assessed with both devices by one examiner. Mean bias and limits of agreement (95% CI) were obtained using scatter plots and Bland-Altman analysis. Results. Overall vertical CDR comparison between HRT III and Kowa nonmyd WX3D measurements showed a mean difference (limits of agreement) of −0.06 (−0.36 to 0.24). For the CDR < 0.5 group (n=24) the mean difference in vertical CDR was −0.14 (−0.34 to 0.06), and for the CDR ≥ 0.5 group (n=16) it was 0.06 (−0.21 to 0.34). Conclusion. This study showed good agreement between Kowa nonmyd WX3D and HRT III with regard to widely used optic nerve head parameters in patients with glaucomatous optic neuropathy. However, data from Kowa nonmyd WX3D exhibited a tendency toward larger CDR values than HRT III in the CDR < 0.5 group and toward lower CDR values in the CDR ≥ 0.5 group.

  16. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device that must combine low-light illumination of the human retina, high resolution, and reflection-free imaging. Those constraints make its optical design very sophisticated, but the hardest to satisfy are the reflection-free illumination and the final alignment, owing to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment renders the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by an LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD-plus-capture-lens module is a CCTV camera with built-in autofocus and zoom, combined with a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  17. Nonmydriatic ultra-wide-field scanning laser ophthalmoscopy (Optomap) versus two-field fundus photography in diabetic retinopathy.

    Science.gov (United States)

    Liegl, Raffael; Liegl, Kristine; Ceklic, Lala; Haritoglou, Christos; Kampik, Anselm; Ulbig, Michael W; Kernt, Marcus; Neubauer, Aljoscha S

    2014-01-01

    The purpose of this study was to investigate the diagnostic properties of a 2-laser-wavelength, nonmydriatic, 200° ultra-wide-field scanning laser ophthalmoscope (SLO) versus mydriatic 2-field 45° color fundus photography (EURODIAB standard) for assessing diabetic retinopathy (DR). A total of 143 consecutive eyes of patients with different levels of DR were graded regarding DR level and macular edema based on 2-field color photographs or 1 Optomap Panoramic 200 SLO image. All SLO images were nonmydriatic and all photographs mydriatic. Grading was performed masked to patient and clinical data. Based on photography, 20 eyes had no DR, 44 had mild, 18 moderate, and 42 severe nonproliferative DR, and 19 eyes had proliferative DR. Overall correlation for grading DR level with Optomap SLO compared to photography was moderate, with kappa 0.54; the findings relative to 2-field fundus photography need to be confirmed in further studies.

  18. Do it yourself smartphone fundus camera – DIYretCAM

    Directory of Open Access Journals (Sweden)

    Biju Raju

    2016-01-01

    This article describes how to make a do-it-yourself smartphone-based fundus camera which can image the central retina as well as the peripheral retina up to the pars plana. It is a cost-effective alternative to a conventional fundus camera.

  19. Comparison of non-mydriatic retinal photography with ophthalmoscopy in 2159 patients: mobile retinal camera study.

    Science.gov (United States)

    Taylor, R; Lovelock, L; Tunbridge, W M; Alberti, K G; Brackenridge, R G; Stephenson, P; Young, E

    1990-01-01

    OBJECTIVE--To determine whether non-mydriatic Polaroid retinal photography was comparable to ophthalmoscopy with mydriasis in routine clinic screening for early, treatable diabetic retinopathy. DESIGN--Prospective study of ophthalmoscopic findings according to retinal camera screening and ophthalmoscopy and outcome of referral to ophthalmologist. SETTING--Outpatient diabetic clinics of three teaching hospitals and three district general hospitals. PATIENTS--2159 adults selected randomly from the diabetic clinics, excluding only those registered as blind or those in wheelchairs and unable to enter the screening vehicle. MAIN OUTCOME MEASURES--Numbers of patients and eyes correctly identified by each technique as requiring referral with potentially treatable retinopathy (new vessel formation and maculopathy) and congruence in numbers of microaneurysms, haemorrhages, and exudates reported. RESULTS--Camera screening missed two cases of new vessel formation and did not identify a further 12 but indicated a need for referral. Ophthalmoscopy missed five cases of new vessel formation and indicated a need for referral in another four for other reasons. Maculopathy was reported in 147 eyes with camera screening alone and 95 eyes by ophthalmoscopy only (χ² = 11.2; p < 0.001), in 66 and 29 of which respectively maculopathy was subsequently confirmed. Overall, 38 eyes received laser treatment for maculopathy after detection by camera screening compared with 17 after ophthalmoscopic detection (χ² = 8.0; p < 0.01). Camera screening underestimated numbers of microaneurysms (χ² = 12.9; p < 0.001) and haemorrhages (χ² = 7.4; p < 0.01) and ophthalmoscopy underestimated hard exudates (χ² = 48.2; p < 0.001).
    CONCLUSIONS--Non-mydriatic Polaroid retinal photography is at least as good as ophthalmoscopy with mydriasis in routine diabetic clinics in identifying new vessel formation and absence of retinopathy and is significantly better in detecting maculopathy.

  20. Evaluation of retinal illumination in coaxial fundus camera

    Science.gov (United States)

    de Oliveira, André O.; de Matos, Luciana; Castro Neto, Jarbas C.

    2016-09-01

    Retinal images are obtained by simultaneously illuminating and imaging the retina, which is achieved using a fundus camera. This device must combine low-light illumination of the fundus with high-resolution, reflection-free imaging. Although current equipment presents a sophisticated solution, it is complex to align due to the high number of off-axis components. In this work, we substituted the complex illumination system with a ring of LEDs mounted coaxially to the imaging optical system, positioning it in place of the holed mirror of the traditional optical design. We evaluated the impact of this substitution with regard to image quality (measured through the modulation transfer function) and the illumination uniformity produced by this system on the retina. The results showed no change in image quality, and no problem was detected concerning uniformity compared to the traditional equipment. Consequently, we avoided off-axis components, easing the alignment of the equipment without reducing either image quality or illumination uniformity.
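    The uniformity question can be explored with a toy model of such a coaxial ring: Lambertian point sources on a circle, with irradiance on a target plane falling off with distance squared and the cosine of the incidence angle. All geometry values here are illustrative assumptions, not the paper's parameters:

```python
import math

def ring_irradiance(x, y, ring_radius=6.0, n_leds=8, distance=40.0):
    """Toy model: relative irradiance at (x, y) on a plane `distance` mm
    from a ring of n Lambertian LEDs (arbitrary units, assumed geometry)."""
    total = 0.0
    for i in range(n_leds):
        phi = 2 * math.pi * i / n_leds
        lx = ring_radius * math.cos(phi)
        ly = ring_radius * math.sin(phi)
        rho2 = (x - lx) ** 2 + (y - ly) ** 2 + distance ** 2
        cos_theta = distance / math.sqrt(rho2)
        total += cos_theta ** 2 / rho2  # Lambertian emission + oblique incidence
    return total

# uniformity over a central 4 mm patch of the target plane
samples = [ring_irradiance(x, y) for x in (-2, 0, 2) for y in (-2, 0, 2)]
uniformity = min(samples) / max(samples)
print(f"min/max irradiance ratio: {uniformity:.3f}")
```

    With the ring far from the target relative to the patch size, the min/max ratio stays close to 1, consistent with the paper's finding of no uniformity problem.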

  1. Bayer Filter Snapshot Hyperspectral Fundus Camera for Human Retinal Imaging.

    Science.gov (United States)

    Kaluzny, Joel; Li, Hao; Liu, Wenzhong; Nesper, Peter; Park, Justin; Zhang, Hao F; Fawzi, Amani A

    2017-04-01

    To demonstrate the versatility and performance of a compact Bayer filter snapshot hyperspectral fundus camera for in vivo clinical applications, including retinal oximetry and macular pigment optical density measurements. Twelve healthy volunteers were recruited under an Institutional Review Board (IRB) approved protocol. Fundus images were taken with a custom hyperspectral camera with a spectral range of 460-630 nm. We determined retinal vascular oxygen saturation (sO2) for the healthy population from the captured spectra by least-squares curve fitting. Additionally, macular pigment optical density was localized and visualized using multispectral reflectometry at selected wavelengths. We successfully determined the mean sO2 of arteries and veins of each subject (ages 21-80) with excellent intrasubject repeatability (1.4% standard deviation). The mean arterial sO2 for all subjects was 90.9% ± 2.5%, whereas the mean venous sO2 for all subjects was 64.5% ± 3.5%. The mean artery-vein (A-V) difference in sO2 varied between 20.5% and 31.9%. In addition, we were able to reveal and quantify macular pigment optical density. We demonstrated a single imaging tool capable of oxygen saturation and macular pigment density measurements in vivo. The unique combination of broad spectral range, high spectral-spatial resolution, rapid and robust imaging capability, and compact design makes this system a valuable tool for multifunction spectral imaging that can easily be performed in a clinical setting.
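    The least-squares oximetry step can be sketched as a two-component fit of measured optical density against oxy- and deoxyhemoglobin extinction spectra, with sO2 taken as the oxygenated fraction. The five-band spectra below are placeholders, not tabulated extinction values:

```python
def fit_so2(od, eps_hbo2, eps_hb):
    """Least-squares fit of optical densities to a two-component
    oxy/deoxy-hemoglobin model; returns the estimated sO2 fraction."""
    # normal equations for the coefficients [c_oxy, c_deoxy]
    a11 = sum(e * e for e in eps_hbo2)
    a12 = sum(a * b for a, b in zip(eps_hbo2, eps_hb))
    a22 = sum(e * e for e in eps_hb)
    b1 = sum(o * e for o, e in zip(od, eps_hbo2))
    b2 = sum(o * e for o, e in zip(od, eps_hb))
    det = a11 * a22 - a12 * a12
    c_oxy = (b1 * a22 - b2 * a12) / det
    c_deoxy = (a11 * b2 - a12 * b1) / det
    return c_oxy / (c_oxy + c_deoxy)

# synthetic check: build a noiseless spectrum with known sO2 = 0.70
eps_hbo2 = [1.0, 0.6, 0.3, 0.8, 1.2]  # placeholder spectra over 5 bands
eps_hb = [0.7, 1.1, 0.9, 0.5, 0.4]
od = [0.7 * a + 0.3 * b for a, b in zip(eps_hbo2, eps_hb)]
print(round(fit_so2(od, eps_hbo2, eps_hb), 2))  # → 0.7
```

    On noiseless synthetic data the fit recovers the known saturation exactly; real spectra would add noise and require tabulated extinction coefficients.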

  2. [Cinematography of ocular fundus with a jointed optical system and tv or cine-camera (author's transl)].

    Science.gov (United States)

    Kampik, A; Rapp, J

    1979-02-01

    A method of cinematography of the ocular fundus is introduced which, by connecting a camera to an indirect ophthalmoscope, allows recording of the monocular picture of the fundus as produced by the ophthalmic lens.

  3. The investigation of chromatic aberration correction for digital eye fundus images

    OpenAIRE

    Jakstys, V.; Marcinkevicius, V.; Treigys, P.

    2016-01-01

    This paper focuses on lateral chromatic aberration correction in images captured with the Optomed SmartScope M5 camera. This portable non-mydriatic eye fundus camera is built without achromatic (colour-corrected) lenses. When a camera system is designed without achromatic lenses, it is necessary to apply image-processing algorithms to correct the lateral chromatic aberration effect. These algorithms scale the fringed colour channels so that all channels spatially overlap each other ...
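The channel-scaling idea mentioned in the abstract can be sketched as follows. The scale factors and the nearest-neighbour resampling are illustrative simplifications; the paper's actual algorithm estimates the scaling from the image itself:

```python
import numpy as np

# Minimal sketch of lateral chromatic aberration correction: rescale the red
# and blue channels about the optical centre so they overlap the green channel.
# Scale factors below are invented; a real pipeline would estimate them.
def scale_channel(channel, scale):
    """Rescale a 2-D channel about its centre using nearest-neighbour sampling."""
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # Each output pixel samples the position it maps to before scaling.
    src_y = np.clip(np.round(cy + (yy - cy) / scale).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + (xx - cx) / scale).astype(int), 0, w - 1)
    return channel[src_y, src_x]

def correct_lca(rgb, scale_r=1.002, scale_b=0.998):
    """Return an RGB image with red/blue channels rescaled onto green."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([scale_channel(r, scale_r), g, scale_channel(b, scale_b)],
                    axis=-1)
```

The green channel is kept as the spatial reference because lateral chromatic aberration is conventionally measured as red/blue displacement relative to green.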

  4. Realization of the ergonomics design and automatic control of the fundus cameras

    Science.gov (United States)

    Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye

    2012-12-01

    The principle of ergonomic design in fundus cameras is to extend user comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular assembly with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects fundus images automatically whether or not the eyes are ametropic. Finally, a moving visual target is developed to expand the fields of the fundus images.

  5. Murine fundus fluorescein angiography: An alternative approach using a handheld camera.

    Science.gov (United States)

    Ehrenberg, Moshe; Ehrenberg, Scott; Schwob, Ouri; Benny, Ofra

    2016-07-01

    In today's modern pharmacologic approach to treating sight-threatening retinal vascular disorders, there is an increasing demand for a compact, mobile, lightweight and cost-effective fluorescein fundus camera to document the effects of antiangiogenic drugs on laser-induced choroidal neovascularization (CNV) in mice and other experimental animals. We have adapted the Kowa Genesis Df camera to perform fundus fluorescein angiography (FFA) in mice. The 1 kg, 28 cm high camera has built-in barrier and exciter filters that allow digital FFA recording to a Compact Flash memory card. Furthermore, this handheld unit has a steady indirect lens holder that firmly attaches to the main unit and securely holds a 90-diopter lens in position, facilitating appropriate focus and stability for photographing the delicate central murine fundus. This easily portable fluorescein fundus camera can effectively record exceptional central retinal vascular detail in murine laser-induced CNV, while readily allowing the investigator to adjust the camera's position according to the variable head and eye movements that can occur randomly while the mouse is optimally anesthetized. This movable image-recording device, with efficiencies of space, time, cost, energy and personnel, has enabled us to accurately document the alterations in the central choroidal and retinal vasculature following induction of CNV, implemented by argon-green laser photocoagulation and disruption of Bruch's membrane, in the experimental murine model of exudative macular degeneration.

  6. Telemedicine for diabetic retinopathy screening using an ultra-widefield fundus camera

    Directory of Open Access Journals (Sweden)

    Hussain N

    2017-08-01

    Nazimul Hussain,1 Maryam Edraki,2 Rima Tahhan,2 Nishanth Sanalkumar,2 Sami Kenz,2 Nagwa Khalil Akasha,2 Brian Mtemererwa,2 Nahed Mohammed2 1Department of Ophthalmology, Al Zahra Hospital, Sharjah, United Arab Emirates; 2Department of Endocrinology, Al Zahra Hospital, Sharjah, United Arab Emirates Objective: Telemedicine reporting of diabetic retinopathy (DR) screening using an ultra-widefield (UWF) fundus camera. Materials and methods: Cross-sectional study of diabetic patients who visited the endocrinology department of a private multi-specialty hospital in the United Arab Emirates between April 2015 and January 2017 and underwent UWF fundus imaging. Fundus pictures were then accessed at the retina clinic in the Department of Ophthalmology. The primary outcome measure was the incidence of any form of DR detected. The secondary outcome measure was failure to acquire a gradable image. Results: A total of 1,024 diabetic individuals were screened for DR from April 2015 to January 2017 in the Department of Endocrinology. The rate of DR was 9.27%; 165 eyes of 95 individuals were diagnosed with some form of DR. Mild non-proliferative DR (NPDR) was seen in 114 of 165 eyes (69.09%), moderate NPDR in 32 eyes (19.39%), severe NPDR in six eyes (3.64%), and proliferative DR (PDR) in 13 eyes (7.88%). The secondary outcome of poor image acquisition occurred in one individual, whose image in one eye could not be graded because of poor picture quality. Conclusions: The present study has shown the effectiveness of DR screening using a UWF fundus camera, and the effectiveness of trained nursing personnel taking fundus images. This model can be replicated in any private multi-specialty hospital to reduce the burden of DR screening in the retina clinic and enhance early detection of treatable DR. Keywords: telemedicine, ultra-widefield camera, diabetic retinopathy screening

  7. Smartphone-based fundus camera device (MII Ret Cam) and technique with ability to image peripheral retina.

    Science.gov (United States)

    Sharma, Ashish; Subramaniam, Saranya Devi; Ramachandran, K I; Lakshmikanthan, Chinnasamy; Krishna, Soujanya; Sundaramoorthy, Selva K

    2016-01-01

    To demonstrate an inexpensive smartphone-based fundus camera device (MII Ret Cam) and a technique with the ability to capture peripheral retinal pictures. A fundus camera was designed in the form of a device with slots to fit a smartphone (built-in camera and flash) and a 20-D lens. With the help of the device and an innovative imaging technique, high-quality fundus videos were taken with easy extraction of images. The MII Ret Cam and the innovative imaging technique were able to capture high-quality images of the peripheral retina, such as the ora serrata and pars plana, in addition to central fundus pictures. Our smartphone-based fundus camera can help clinicians monitor diseases affecting both the central and peripheral retina. It can help patients understand their disease and help clinicians convince their patients of the need for treatment, especially in cases of peripheral lesions. Imaging the peripheral retina has not been demonstrated with existing smartphone-based fundus imaging techniques. The device can also be an inexpensive tool for mass screening.

  8. Screening for diabetic retinopathy: the utility of nonmydriatic retinal photography in Egyptian adults.

    Science.gov (United States)

    Penman, A D; Saaddine, J B; Hegazy, M; Sous, E S; Ali, M A; Brechner, R J; Herman, W H; Engelgau, M M; Klein, R

    1998-09-01

    Although regular screening for diabetic retinopathy with ophthalmoscopy or retinal photography is widely recommended in the United States and Europe, few reports of its use in developing countries are available. We compared the performance of screening by retinal photography with that of indirect ophthalmoscopy by using data from a population-based survey of diabetes and its complications in Egypt. During that project, 427 persons with diabetes underwent an eye examination and fundus photography with a non-mydriatic camera through a dilated pupil. Data from the examinations of the right eye of each patient are presented. Ninety-two (22%) of the 427 retinal photographs were ungradable; in 58 eyes (63%), this was due to media opacity (42 eyes with cataract, 3 with corneal opacity, and 13 with both). Agreement between retinal photography and indirect ophthalmoscopy was poor (kappa = 0.33; 95% CI = 0.27-0.39) and primarily due to the large number of eyes (n = 79) with ungradable photographs that could be graded by ophthalmoscopy. None of these eyes was judged by ophthalmoscopy to have sight-threatening retinopathy. Fifty-four photographs were diagnosed with greater retinopathy than found on ophthalmoscopy. Retinal photography with the nonmydriatic camera through a dilated pupil is a useful method to screen for diabetic retinopathy in most adults in Egypt. However, such screening strategies have limited use in older persons and in persons with corneal disease or cataract.
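Since the abstract reports agreement between photography and ophthalmoscopy as a kappa statistic, a generic Cohen's kappa computation may be a useful companion. The 2x2 confusion table below is synthetic, not the study's data:

```python
import numpy as np

# Generic Cohen's kappa on a square confusion table: observed agreement on the
# diagonal, chance agreement from the row/column marginals.
def cohens_kappa(table):
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    p_expected = (table.sum(0) * table.sum(1)).sum() / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Synthetic example -- rows: photography grade (no DR / DR),
# columns: ophthalmoscopy grade. Counts are invented for illustration.
table = [[200, 40],
         [60, 100]]
print(round(cohens_kappa(table), 2))  # prints 0.47
```

Values near 0.33, as in the study, indicate only fair agreement; here the chance-corrected term is what separates kappa from raw percent agreement.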

  9. An evaluation of fundus photography and fundus autofluorescence in the diagnosis of cuticular drusen

    DEFF Research Database (Denmark)

    Høeg, Tracy B; Moldow, Birgitte; Klein, Ronald

    2016-01-01

    PURPOSE: To examine non-mydriatic fundus photography (FP) and fundus autofluorescence (FAF) as alternative non-invasive imaging modalities to fluorescein angiography (FA) in the detection of cuticular drusen (CD). METHODS: Among 2953 adults from the Danish Rural Eye Study (DRES) with gradable FP...

  10. Quality Enhancement and Nerve Fibre Layer Artefacts Removal in Retina Fundus Images by Off Axis Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras are employed worldwide by retina specialists to diagnose diabetic retinopathy and other degenerative diseases. Even with this relative ease of use, the images produced by these systems sometimes suffer from reflectance artefacts, mainly due to the nerve fibre layer (NFL) or camera-lens-related reflections. We propose a technique that employs multiple fundus images acquired from the same patient to obtain a single higher-quality image without these reflectance artefacts. The removal of bright artefacts, and particularly of NFL reflectance, can greatly reduce false positives in the detection of retinal lesions such as exudates, drusen and cotton-wool spots by automatic systems or manual inspection. If enough redundant information is provided by the multiple images, this technique also compensates for suboptimal illumination. The fundus images are acquired in a straightforward but unorthodox manner: the stare point of the patient is changed between each shot while the camera is kept fixed. Between shots, the apparent shape and position of all the retinal structures that do not exhibit isotropic reflectance (e.g. bright artefacts) change. Our algorithm exploits this physical effect to extract the pixels belonging to the inner layers of the retina, hence obtaining a single artefact-free image.
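The physical principle behind this technique can be illustrated with a toy per-pixel minimum over registered frames: bright artefacts that move between shots are suppressed, while structures fixed across shots survive. The paper's actual algorithm is more elaborate than this sketch:

```python
import numpy as np

# Toy illustration only: NFL/lens reflections move between shots while the
# inner retina stays put, so a per-pixel minimum over co-registered grayscale
# images removes the moving bright artefacts.
def remove_bright_artifacts(registered_stack):
    """registered_stack: (n_images, H, W) grayscale, already co-registered."""
    return registered_stack.min(axis=0)

# Synthetic demo: a flat retina with a bright artefact at a different pixel
# in each of three shots.
retina = np.full((4, 4), 100.0)
shots = np.stack([retina.copy() for _ in range(3)])
shots[0, 0, 0] = 255.0
shots[1, 1, 1] = 255.0
shots[2, 2, 2] = 255.0
clean = remove_bright_artifacts(shots)
print(clean.max())  # prints 100.0 -- every artefact removed
```

A plain minimum also darkens under-illuminated regions, which is why the published method needs the redundancy and robustness the abstract describes rather than this single reduction.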

  11. Fundus autofluorescence and colour fundus imaging compared during telemedicine screening in patients with diabetes.

    Science.gov (United States)

    Kolomeyer, Anton M; Baumrind, Benjamin R; Szirth, Bernard C; Shahid, Khadija; Khouri, Albert S

    2013-06-01

    We investigated the use of fundus autofluorescence (FAF) imaging in screening the eyes of patients with diabetes. Images were obtained from 50 patients with type 2 diabetes undergoing telemedicine screening with colour fundus imaging. The colour and FAF images were obtained with a 15.1 megapixel non-mydriatic retinal camera. Colour and FAF images were compared for pathology seen in nonproliferative and proliferative diabetic retinopathy (NPDR and PDR, respectively). A qualitative assessment was made of the ease of detecting early retinopathy changes and the extent of existing retinopathy. The mean age of the patients was 47 years, most were male (82%) and most were African American (68%). Their mean visual acuity was 20/45 and their mean intraocular pressure was 14.3 mm Hg. Thirty-eight eyes (76%) did not show any diabetic retinopathy changes on colour or FAF imaging. Seven patients (14%) met the criteria for NPDR and five (10%) for severe NPDR or PDR. The most common findings were microaneurysms, hard exudates and intra-retinal haemorrhages (IRH) (n = 6 for each). IRH, microaneurysms and chorioretinal scars were more easily visible on FAF images. Hard exudates, pre-retinal haemorrhage and fibrosis, macular oedema and Hollenhorst plaque were easier to identify on colour photographs. The value of FAF imaging as a complementary technique to colour fundus imaging in detecting diabetic retinopathy during ocular screening warrants further investigation.

  12. Determining degree of optic nerve edema from color fundus photography

    Science.gov (United States)

    Agne, Jason; Wang, Jui-Kai; Kardon, Randy H.; Garvin, Mona K.

    2015-03-01

    Swelling of the optic nerve head (ONH) is subjectively assessed by clinicians using the Frisén scale. It is believed that a direct measurement of ONH volume would serve as a better representation of the swelling. However, a direct measurement requires optic nerve imaging with spectral-domain optical coherence tomography (SD-OCT) and 3D segmentation of the resulting images, which is not always available during clinical evaluation. Furthermore, telemedical imaging of the eye at remote locations is more feasible with non-mydriatic fundus cameras, which are less costly than OCT imagers. Therefore, there is a critical need to develop a more quantitative analysis of optic nerve swelling on a continuous scale, similar to SD-OCT. Here, we select features from more commonly available 2D fundus images and use them to predict ONH volume. Twenty-six features were extracted from each of 48 color fundus images. The features include attributes of the blood vessels, optic nerve head, and peripapillary retina. These features were used in a regression analysis to predict ONH volume, as computed from a segmentation of the SD-OCT image. The regression analysis yielded a mean square error of 2.43 mm³ and a correlation coefficient between computed and predicted volumes of R = 0.771, which suggests that ONH volume may be predicted from fundus features alone.
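The regression step described above can be sketched with ordinary least squares on synthetic data (the study's 26 real image features and SD-OCT-derived volumes are not reproduced here):

```python
import numpy as np

# Hedged sketch: least-squares regression from 2-D fundus-image features to an
# SD-OCT-derived ONH volume. Features and volumes are simulated; the study
# used 26 features from each of 48 color fundus images.
rng = np.random.default_rng(1)
X = rng.normal(size=(48, 26))                  # 26 synthetic features per image
true_w = rng.normal(size=26)
volume = X @ true_w + rng.normal(0, 0.1, 48)   # simulated ONH volumes

X1 = np.column_stack([np.ones(48), X])         # intercept column
w, *_ = np.linalg.lstsq(X1, volume, rcond=None)
predicted = X1 @ w

r = np.corrcoef(volume, predicted)[0, 1]       # correlation, as in the study
mse = np.mean((volume - predicted) ** 2)
print(r, mse)
```

Note that fitting and evaluating on the same 48 samples, as here, is optimistic; the reported R = 0.771 came from the authors' own validation protocol.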

  13. Ocular Fundus Photography as an Educational Tool.

    Science.gov (United States)

    Mackay, Devin D; Garza, Philip S

    2015-10-01

    The proficiency of nonophthalmologists with direct ophthalmoscopy is poor, which has prompted a search for alternative technologies to examine the ocular fundus. Although ocular fundus photography has existed for decades, its use has been traditionally restricted to ophthalmology clinical care settings and textbooks. Recent research has shown a role for nonmydriatic fundus photography in nonophthalmic settings, encouraging more widespread adoption of fundus photography technology. Recent studies have also affirmed the role of fundus photography as an adjunct or alternative to direct ophthalmoscopy in undergraduate medical education. In this review, the authors examine the use of ocular fundus photography as an educational tool and suggest future applications for this important technology. Novel applications of fundus photography as an educational tool have the potential to resurrect the dying art of funduscopy.

  14. An image based auto-focusing algorithm for digital fundus photography.

    Science.gov (United States)

    Moscaritolo, Michele; Jampel, Henry; Knezevich, Frederick; Zeimer, Ran

    2009-11-01

    In fundus photography, the task of fine focusing the image is demanding and lack of focus is quite often the cause of suboptimal photographs. The introduction of digital cameras has provided an opportunity to automate the task of focusing. We have developed a software algorithm capable of identifying best focus. The auto-focus (AF) method is based on an algorithm we developed to assess the sharpness of an image. The AF algorithm was tested in the prototype of a semi-automated nonmydriatic fundus camera designed to screen in the primary care environment for major eye diseases. A series of images was acquired in volunteers while focusing the camera on the fundus. The image with the best focus was determined by the AF algorithm and compared to the assessment of two masked readers. A set of fundus images was obtained in 26 eyes of 20 normal subjects and 42 eyes of 28 glaucoma patients. The 95% limits of agreement between the readers and the AF algorithm were -2.56 to 2.93 and -3.7 to 3.84 diopter and the bias was 0.09 and 0.71 diopter, for the two readers respectively. On average, the readers agreed with the AF algorithm on the best correction within less than 3/4 diopter. The intraobserver repeatability was 0.94 and 1.87 diopter, for the two readers respectively, indicating that the limit of agreement with the AF algorithm was determined predominantly by the repeatability of each reader. An auto-focus algorithm for digital fundus photography can identify the best focus reliably and objectively. It may improve the quality of fundus images by easing the task of the photographer.
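The paper's specific sharpness measure is not reproduced here, but the general auto-focus idea, scoring a focus sweep with a sharpness metric and picking the maximum, can be sketched with a generic gradient-energy metric:

```python
import numpy as np

# Generic auto-focus sketch: score each frame of a focus sweep and select the
# sharpest. The gradient-energy metric is a common stand-in, not the paper's
# proprietary algorithm.
def sharpness(img):
    """Sum of squared finite-difference gradients; larger means sharper."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).sum())

def best_focus(frames):
    """Return the index of the sharpest frame in a focus sweep."""
    return int(np.argmax([sharpness(f) for f in frames]))

def box_blur(img, k):
    """Apply a 3x3 mean filter k times to simulate defocus."""
    out = img.astype(float)
    for _ in range(k):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

# Demo: a sweep through focus, sharpest in the middle.
checker = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
frames = [box_blur(checker, k) for k in (4, 2, 0, 2, 4)]
print(best_focus(frames))  # prints 2 -- the in-focus frame
```

In a real camera the sweep is over lens positions rather than synthetic blurs, and the metric's peak identifies the diopter setting to lock in.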

  15. Detailed Morphological Changes of Foveoschisis in Patient with X-Linked Retinoschisis Detected by SD-OCT and Adaptive Optics Fundus Camera

    Directory of Open Access Journals (Sweden)

    Keiichiro Akeo

    2015-01-01

    Purpose. To report the morphological and functional changes associated with a regression of foveoschisis in a patient with X-linked retinoschisis (XLRS). Methods. A 42-year-old man with XLRS underwent genetic analysis and detailed ophthalmic examinations. Functional assessments included best-corrected visual acuity (BCVA), full-field electroretinograms (ERGs), and multifocal ERGs (mfERGs). Morphological assessments included fundus photography, spectral-domain optical coherence tomography (SD-OCT), and adaptive optics (AO) fundus imaging. After the baseline clinical data were obtained, topical dorzolamide was applied, and the patient was followed for 24 months. Results. A previously reported RS1 gene mutation (P203L) was found in the patient. At baseline, his decimal BCVA was 0.15 in the right eye and 0.3 in the left eye. Fundus photographs showed bilateral spoke-wheel maculopathy. SD-OCT confirmed the foveoschisis in the left eye. The AO images of the left eye showed spoke-wheel retinal folds, which were thinner than those in the fundus photographs. During the follow-up period, the foveal thickness in the SD-OCT images and the number of retinal folds in the AO images were reduced. Conclusions. We have presented the detailed morphological changes of foveoschisis in a patient with XLRS detected by SD-OCT and an AO fundus camera. However, the findings do not indicate whether the changes were influenced by topical dorzolamide or the natural history.

  16. Compact Laser Doppler Flowmeter (LDF) Fundus Camera for the Assessment of Retinal Blood Perfusion in Small Animals.

    Directory of Open Access Journals (Sweden)

    Marielle Mentek

    Noninvasive techniques for ocular blood perfusion assessment are of crucial importance for exploring microvascular alterations related to systemic and ocular diseases. However, few techniques adapted to rodents are available, and most are invasive or not specifically focused on the optic nerve head (ONH), choroid, or retinal circulation. Here we present the results obtained with a new rodent-adapted compact fundus camera based on laser Doppler flowmetry (LDF). A confocal miniature flowmeter was fixed to a specially designed 3D rotating mechanical arm and adjusted on a rodent stereotaxic table in order to accurately point the laser beam at the retinal region of interest. The linearity of the LDF measurements was assessed using a rotating Teflon wheel and a flow of microspheres in a glass capillary. In vivo reproducibility was assessed in Wistar rats with repeated measurements (inter-session and inter-day) of retinal artery and ONH blood velocity in six and ten rats, respectively. These parameters were also recorded during an acute intraocular pressure increase to 150 mmHg and after heart arrest (n = 5 rats). The perfusion measurements showed perfect linearity between LDF velocity and Teflon wheel or microsphere speed. Intraclass correlation coefficients for retinal artery and ONH velocity (0.82 and 0.86, respectively) indicated strong inter-session repeatability and stability. Inter-day reproducibility was good (0.79 and 0.7, respectively). Upon ocular blood flow cessation, the retinal artery velocity signal substantially decreased, whereas the ONH signal did not significantly vary, suggesting that it could mostly be attributed to tissue light scattering. We have demonstrated that, while not adapted for ONH blood perfusion assessment, this device allows pertinent, stable and repeatable measurements of retinal blood perfusion in rats.

  17. Fundus Autofluorescence Imaging in an Ocular Screening Program

    Directory of Open Access Journals (Sweden)

    A. M. Kolomeyer

    2012-01-01

    Purpose. To describe the integration of fundus autofluorescence (FAF) imaging into an ocular screening program. Methods. Fifty consecutive screening participants were included in this prospective pilot imaging study. Color and FAF (530/640 nm exciter/barrier filters) images were obtained with a 15.1MP Canon nonmydriatic hybrid camera. A clinician evaluated the images on site to determine the need for referral. Visual acuity (VA), intraocular pressure (IOP), and ocular pathology detected by the color fundus and FAF imaging modalities were recorded. Results. Mean ± SD age was 47.4 ± 17.3 years. Fifty-two percent were female and 58% African American. Twenty-seven percent had had a comprehensive ocular examination within the past year. Mean VA was 20/39 in the right eye and 20/40 in the left eye. Mean IOP was 15 mmHg bilaterally. Positive color and/or FAF findings were identified in nine (18%) individuals, with diabetic retinopathy or macular edema (n = 4), focal RPE defects (n = 2), age-related macular degeneration (n = 1), central serous retinopathy (n = 1), and ocular trauma (n = 1). Conclusions. FAF was successfully integrated into our ocular screening program and aided in the identification of ocular pathology. Larger studies examining the utility of this technology in screening programs may be warranted.

  18. Fundus autofluorescence imaging in an ocular screening program.

    Science.gov (United States)

    Kolomeyer, A M; Nayak, N V; Szirth, B C; Khouri, A S

    2012-01-01

    Purpose. To describe integration of fundus autofluorescence (FAF) imaging into an ocular screening program. Methods. Fifty consecutive screening participants were included in this prospective pilot imaging study. Color and FAF (530/640 nm exciter/barrier filters) images were obtained with a 15.1MP Canon nonmydriatic hybrid camera. A clinician evaluated the images on site to determine need for referral. Visual acuity (VA), intraocular pressure (IOP), and ocular pathology detected by color fundus and FAF imaging modalities were recorded. Results. Mean ± SD age was 47.4 ± 17.3 years. Fifty-two percent were female and 58% African American. Twenty-seven percent had a comprehensive ocular examination within the past year. Mean VA was 20/39 in the right eye and 20/40 in the left eye. Mean IOP was 15 mmHg bilaterally. Positive color and/or FAF findings were identified in nine (18%) individuals with diabetic retinopathy or macular edema (n = 4), focal RPE defects (n = 2), age-related macular degeneration (n = 1), central serous retinopathy (n = 1), and ocular trauma (n = 1). Conclusions. FAF was successfully integrated in our ocular screening program and aided in the identification of ocular pathology. Larger studies examining the utility of this technology in screening programs may be warranted.

  19. Clinical analysis of the application of non-mydriatic fundus photography for diabetic retinopathy screening

    Institute of Scientific and Technical Information of China (English)

    夏伟; 王利; 李蓬秋; 张学军; 杨艳; 杨毅

    2013-01-01

    medical history were recorded. Serum levels of fasting plasma glucose (FPG), lipids, glycated hemoglobin (HbA1c) and uric acid (UA) were measured. RESULTS: In total, 317 of 768 DM patients (41.3%) were diagnosed with DR. The detection rate in women was significantly higher than that in men (45.2% vs 37.6%, P<0.05). Compared with the NDR group, the DR group had older age, longer disease duration and higher systolic blood pressure (SBP), FPG, triglyceride (TG), HbA1c and UA levels (P<0.05). Binary logistic regression analysis showed that duration, gender, SBP and HbA1c were independent risk factors for DR in DM patients. CONCLUSION: DR in DM patients is quite common and is closely associated with duration, gender, blood pressure and glucose. Non-mydriatic fundus photography is a useful method for screening for DR.
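The binary logistic regression used in this study to identify independent risk factors can be sketched with plain gradient descent on synthetic data (the risk model and coefficients below are invented for illustration, not the study's results):

```python
import numpy as np

# Sketch of binary logistic regression for DR risk-factor analysis. Data are
# simulated: longer diabetes duration and higher HbA1c raise DR probability.
def fit_logistic(X, y, lr=0.1, steps=2000):
    """Gradient-descent logistic regression; returns weights (bias first)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))   # predicted probability of DR
        w -= lr * X1.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

rng = np.random.default_rng(2)
duration = rng.uniform(0, 20, 500)          # years of diabetes (synthetic)
hba1c = rng.uniform(5, 12, 500)             # HbA1c, % (synthetic)
X = np.column_stack([duration, hba1c])
logit = -6 + 0.2 * duration + 0.5 * hba1c   # invented risk model
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-logit))).astype(float)

# Standardize features so one learning rate suits both.
w = fit_logistic((X - X.mean(0)) / X.std(0), y)
print(np.round(w, 2))  # both risk-factor coefficients come out positive
```

In the study's setting, positive significant coefficients for duration, gender, SBP and HbA1c are what mark them as independent risk factors.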

  20. Design of optical system for catadioptric fundus camera

    Institute of Scientific and Technical Information of China (English)

    李灿; 宋淑梅; 刘英; 李淳; 李小虎; 孙强

    2012-01-01

    To eliminate the scattered light and central ghost in a classical fundus camera, the optical system of a catadioptric fundus camera with a 40° field of view and a working distance of 48 mm was designed. An off-axis reflective ophthalmic lens with free-form surfaces was designed to correct the off-axis aberrations. Two free-form surfaces were introduced in the imaging objective system to correct the residual off-axis aberrations of the reflective ophthalmic lens. In the optimization of the imaging system, an eye model with varying defocus was proposed to eliminate the negative effect of eye aberrations as well as to accommodate eyes with different refractive errors. Three adjacent illumination rings were introduced in the illumination path to avoid undesirable light reflected by the eye's optical system. Experiments show that the accommodation range of the system is between -10 m⁻¹ and +10 m⁻¹, the resolution at the object plane is 33 lp/mm across the entire field of view, and the maximum distortion is less than 8.5%. Furthermore, the illumination non-uniformity is less than 15%, with no scattered light or central ghost. The designed catadioptric fundus camera with free-form surfaces achieves a large field of view and a long working distance, and largely removes the undesirable scattered light and central ghost.

  1. The fundus slit lamp.

    Science.gov (United States)

    Gellrich, Marcus-Matthias

    2015-01-01

    Fundus biomicroscopy with the slit lamp as it is widely practiced nowadays was not established until the 1980s, with the introduction of the Volk +90 and +60 D lenses. Thereafter, little progress has been made in retinal imaging with the slit lamp. It is the aim of this paper to fully exploit the potential of a video slit lamp for fundus documentation by using easily accessible additions. Suitable still images are easily retrieved from video recordings of slit lamp examinations. The effects of changes in the slit lamp itself (slit beam and apertures) and its examination equipment (converging lenses from +40 to +90 D) on the quality and spectrum of fundus images are demonstrated. Imaging software is applied to reconstruct larger fundus areas in a mosaic pattern (Hugin®) and to perform the flicker test in order to visualize changes in the same fundus area at different points in time (PowerPoint®). The three lenses +90/+60/+40 D are a good choice for imaging the whole spectrum of retinal diseases. Displacement of the oblique slit light can be used to assess changes in the surface profile of the inner retina, which occur e.g. in macular holes or pigment epithelial detachment. The mosaic function in its easiest form (one strip of the macula adapted to one strip with the optic disc) provides an overview of the posterior pole comparable to a fundus camera's image. A reconstruction of larger fundus areas is feasible for imaging in vitreoretinal surgery or occlusive vessel disease. The flicker test is a fine tool for monitoring progressive glaucoma by changes in the optic disc, and it is also a valuable diagnostic tool in macular disease. Nearly all retinal diseases can be imaged with the slit lamp, irrespective of whether they affect the posterior pole, mainly the optic nerve or the macula, the whole retina or only its periphery. Even a basic fundus-controlled perimetry is possible. Therefore fundus videography with the slit lamp is a worthwhile approach especially for the

  2. Fundus imaging with a mobile phone: A review of techniques

    Directory of Open Access Journals (Sweden)

    Mahesh P Shanmugam

    2014-01-01

    Fundus imaging with a fundus camera is an essential part of ophthalmic practice. A mobile phone with its in-built camera and flash can be used to obtain fundus images of reasonable quality. The mobile phone can be used as an indirect ophthalmoscope when coupled with a condensing lens. It can be used as a direct ophthalmoscope after minimal modification, wherein the fundus can be viewed without an intervening lens in young patients with dilated pupils. Employing the ubiquitous mobile phone to obtain fundus images has the potential for mass screening, enables ophthalmologists without a fundus camera to document and share findings, is a tool for telemedicine and is rather inexpensive.

  3. [Heterotopic fundus (author's transl)].

    Science.gov (United States)

    Denden, A

    1976-07-01

    Fundus heterotopicus is the term used to describe a rare, non-hereditary curvature anomaly of the fundus in the non-myopic eye, which is characterized: 1. functionally, by a slowly increasing myopic-astigmatic refractive error; 2. by correctable bitemporal or binasal refraction scotomata; and 3. ophthalmoscopically, by a posterior outpouching of the nasal or temporal fundus portions, with the optic disc and macula included in the obliquely descending wall of the ectasia.

  4. Fundus Photography in the 21st Century--A Review of Recent Technological Advances and Their Implications for Worldwide Healthcare.

    Science.gov (United States)

    Panwar, Nishtha; Huang, Philemon; Lee, Jiaying; Keane, Pearse A; Chuan, Tjin Swee; Richhariya, Ashutosh; Teoh, Stephen; Lim, Tock Han; Agrawal, Rupesh

    2016-03-01

    The introduction of fundus photography has significantly impacted retinal imaging and retinal screening programs. Fundus cameras play a vital role in addressing the causes of preventable blindness. More attention is being turned to developing countries, where infrastructure and access to healthcare are limited. One of the major limitations of tele-ophthalmology is restricted access to the office-based fundus camera. Recent advances in access to telecommunications, coupled with the introduction of portable cameras and smartphone-based fundus imaging systems, have resulted in an exponential surge in available technologies for portable fundus photography. Retinal cameras in the near future will have to cater to these needs by featuring a low-cost, portable design with automated controls and digitized images with Web-based transfer. In this review, we aim to highlight the advances in fundus photography for retinal screening and discuss the advantages, disadvantages, and implications of the various technologies that are currently available.

  5. Application of Fundus Photography in Hypertensive and Arteriosclerotic Retinopathy

    Institute of Scientific and Technical Information of China (English)

    欧春蓓

    2014-01-01

    Objective: To explore, through examination of the fundus blood vessels, the various causes of hypertensive retinopathy. Methods: All patients underwent systematic ocular examination; for patients with hypertensive-arteriosclerotic retinopathy, fluorescein fundus angiography was performed when their general condition allowed. Results: Among the 106 patients (212 affected eyes), all hypertensive patients had some degree of fundus retinopathy. Conclusion: Fundus examination of hypertensive patients provides highly intuitive documentation and offers important clinical reference value for the diagnosis, treatment, and prognosis of hypertension.

  4. Automated Detection and Differentiation of Drusen, Exudates, and Cotton-wool Spots in Digital Color Fundus Photographs for Early Diagnosis of Diabetic Retinopathy

    Science.gov (United States)

    Niemeijer, Meindert; van Ginneken, Bram; Russell, Stephen R.; Suttorp-Schulten, Maria S.A.

    2008-01-01

    Purpose To describe and evaluate a machine learning-based, automated system to detect exudates and cotton-wool spots in digital color fundus photographs, and differentiate them from drusen, for early diagnosis of diabetic retinopathy. Methods Three hundred retinal images from one eye of three hundred patients with diabetes were selected from a diabetic retinopathy telediagnosis database (nonmydriatic camera, two-field photography); 100 with previously diagnosed ‘bright’ lesions, and 200 without. A machine learning computer program was developed that can identify and differentiate among drusen, (hard) exudates, and cotton-wool spots. A human expert standard for the 300 images was obtained by consensus annotation by two retinal specialists. Sensitivities and specificities of the annotations on the 300 images by the automated system and a third retinal specialist were determined. Results The system achieved an area under the ROC curve of 0.95 and sensitivity/specificity pairs of 0.95/0.88 for the detection of ‘bright’ lesions of any type, and 0.95/0.86, 0.70/0.93 and 0.77/0.88 for the detection of exudates, cotton-wool spots and drusen, respectively. The third retinal specialist achieved pairs of 0.95/0.74 for ‘bright’ lesions, and 0.90/0.98, 0.87/0.98 and 0.92/0.79 per lesion type. Conclusions A machine learning-based, automated system capable of detecting exudates and cotton-wool spots and differentiating them from drusen in color images obtained in community-based diabetic patients has been developed and approaches the performance of retinal experts. If the machine learning can be improved with additional training datasets, it may be useful to detect clinically important ‘bright’ lesions, enhance early diagnosis and reduce suffering from visual loss in patients with diabetes. PMID:17460289
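The sensitivity/specificity pairs reported above come from comparing per-image detections against the expert reference standard; a minimal self-contained sketch with toy labels (not the study data):

```python
def sensitivity_specificity(pred, truth):
    """Per-image detection agreement: pred/truth are 1 (lesion present) or 0."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Toy gradings for ten images; 1 = 'bright' lesion present.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(sensitivity_specificity(pred, truth))  # sensitivity 0.75, specificity ~0.83
```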

  7. Local resolved spectroscopy at the human ocular fundus in vivo: technique and clinical examples

    Science.gov (United States)

    Hammer, Martin; Schweitzer, Dietrich; Scibor, Mateusz

    1996-01-01

    Ocular fundus reflectometry is known as a method for the determination of the optical density of pigments at the eye ground. This has been described for diagnostic investigations at single locations. The new technique of imaging spectroscopy enables the recording of one dimensional local distribution of spectra from the fundus which is illuminated confocal to the entrance slit of a spectrograph. A fundus reflectometer consisting of a Zeiss fundus camera, an imaging spectrograph, and an intensified CCD-camera are presented. The local resolved spectra gained by this apparatus are approximated by a mathematical model on the basis of the anatomy of the fundus as a structure of layers with different optical properties. Each spectrum is assumed to be described by a function of the absorption spectra of the pigments found in the retinal and choroidal tissue. Assuming the existence of parameters which are independent from the fundus location we have to approximate the measured local distribution of spectra by a system of coupled non-linear equations. By a least square fit the local distribution of the extinction of melanin, xantophyll and hemoglobin may be obtained as well as the extension of pathologic alterations at the fundus. The benefits of the method for clinical diagnostics are discussed at first measurements from physiological and pathological examples.
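In its simplest linear (Beer-Lambert) form, the layered-model fit described above reduces to a least-squares problem in the per-pigment extinctions; a toy sketch with made-up spectra (the actual model is non-linear and uses published absorption spectra of melanin, xanthophyll and haemoglobin):

```python
import numpy as np

# Hypothetical extinction coefficients of two pigments at five wavelengths.
E = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.6, 0.9],
              [0.4, 0.7],
              [0.2, 0.3]])
true_ext = np.array([0.7, 0.4])   # per-pigment extinctions to recover
measured = E @ true_ext           # noiseless synthetic fundus spectrum

# Least-squares fit of the measured spectrum against the pigment spectra.
fit, *_ = np.linalg.lstsq(E, measured, rcond=None)
print(fit)
```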

  8. High-Precision Eye Tracking Using Fundus Images

    Science.gov (United States)

    Mulligan, Jeffrey B.

    1996-01-01

    Fundus images provide high optical gain for eye movement tracking, i.e. large image displacements occur as a result of small eye rotations. Subpixel registration techniques can provide resolution better than 1 arc minute using images acquired with a CCD camera. Ocular torsion may also be estimated, with a precision of approximately 0.1 degree. This talk will discuss the software algorithms used to attain this performance.
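Such registration-based trackers typically start from an integer-pixel shift estimate via phase correlation, which is then refined to subpixel precision (e.g. by parabolic interpolation around the peak, omitted here); a minimal sketch of the integer-pixel stage:

```python
import numpy as np

def phase_correlate(ref, moved):
    """Integer-pixel (row, col) shift of `moved` relative to `ref`
    via FFT phase correlation; assumes a circular-shift model."""
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into the signed shift range.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, ref.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                    # stand-in for a fundus frame
shifted = np.roll(img, (3, -5), axis=(0, 1))  # simulated eye movement
print(phase_correlate(img, shifted))          # (3, -5)
```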

  9. Comparison between Early Treatment Diabetic Retinopathy Study 7-field retinal photos and non-mydriatic, mydriatic and mydriatic steered widefield scanning laser ophthalmoscopy for assessment of diabetic retinopathy

    DEFF Research Database (Denmark)

    Rasmussen, Malin Lundberg; Broe, Rebecca; Frydkjaer-Olsen, Ulrik;

    2015-01-01

    AIMS: To compare non-mydriatic, mydriatic and steered mydriatic widefield retinal images with mydriatic 7-field Early Treatment Diabetic Retinopathy Study (ETDRS)-standards in grading diabetic retinopathy (DR). METHODS: We examined 95 patients (190 eyes) with type 1 diabetes. A non-mydriatic, a m...

  10. Fundus Photography in the 21st Century—A Review of Recent Technological Advances and Their Implications for Worldwide Healthcare

    Science.gov (United States)

    Panwar, Nishtha; Huang, Philemon; Lee, Jiaying; Keane, Pearse A.; Chuan, Tjin Swee; Richhariya, Ashutosh; Teoh, Stephen; Lim, Tock Han

    2016-01-01

    Abstract Background: The introduction of fundus photography has impacted retinal imaging and retinal screening programs significantly. Literature Review: Fundus cameras play a vital role in addressing the cause of preventable blindness. More attention is being turned to developing countries, where infrastructure and access to healthcare are limited. One of the major limitations for tele-ophthalmology is restricted access to the office-based fundus camera. Results: Recent advances in access to telecommunications coupled with introduction of portable cameras and smartphone-based fundus imaging systems have resulted in an exponential surge in available technologies for portable fundus photography. Retinal cameras in the near future would have to cater to these needs by featuring a low-cost, portable design with automated controls and digitalized images with Web-based transfer. Conclusions: In this review, we aim to highlight the advances of fundus photography for retinal screening as well as discuss the advantages, disadvantages, and implications of the various technologies that are currently available. PMID:26308281

  11. Retinal oxygen saturation evaluation by multi-spectral fundus imaging

    Science.gov (United States)

    Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

    2007-03-01

    Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tunable filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Images at separate wavelengths that were misaligned by slight eye motion were corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven-wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans.
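A common simplification of such multi-wavelength oximetry is the two-wavelength optical-density-ratio method: vessel optical density is measured at an oxygen-sensitive and an isosbestic wavelength, and saturation is taken as linear in their ratio. A sketch with hypothetical calibration constants and intensities (none of these numbers are from the paper):

```python
import math

def optical_density(i_vessel, i_background):
    """Vessel optical density from image intensities (Beer-Lambert)."""
    return math.log10(i_background / i_vessel)

def oxygen_saturation(iv_sens, ib_sens, iv_iso, ib_iso, a=1.28, b=-1.23):
    """Two-wavelength optical-density-ratio oximetry; a and b are
    hypothetical linear calibration constants."""
    odr = optical_density(iv_sens, ib_sens) / optical_density(iv_iso, ib_iso)
    return a + b * odr

# Artery-like intensities at the oxygen-sensitive and isosbestic wavelengths.
so2 = oxygen_saturation(104, 120, 70, 120)
print(round(so2, 2))  # 0.95 for these illustrative values
```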

  12. Determining the size of retinal features in prematurely born children by fundus photography.

    Science.gov (United States)

    Knaapi, Laura; Lehtonen, Tuomo; Vesti, Eija; Leinonen, Markku T

    2015-06-01

    The purpose was to study the effect of prematurity on the macula-disc centre distance and whether it could be used as a reference tool for determining the size of retinal features in prematurely born children by fundus photography. The macula-disc centre distance of the left eye was measured in pixels from digital fundus photographs taken from 27 prematurely born children aged 10-11 years with a Topcon fundus camera. A conversion factor for the Topcon fundus camera (194.98 pixel/mm for a 50° lens) was used to convert the results in pixels into metric units. The macula-disc centre distance was 4.74 mm, SD 0.29. No correlation between ametropia and the macula-disc centre distance was found (r = -0.07, p > 0.05). One child (subject 20) had high myopia and retinopathy of prematurity (ROP), and the macula-disc centre distance was longer than average (6.35 mm). The macula-disc centre distance in prematurely born children at the age of 10-11 years provides an easy-to-use reference tool for evaluating the size of retinal features on fundus photographs. However, if complications of ROP, for example temporal macular dragging or high ametropia, are present, the macula-disc centre distance is potentially altered and a personal macula-disc centre distance should be determined and used as a refined reference tool. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
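Converting a pixel measurement to metric units with the stated conversion factor is a single division; a pixel distance of 924 is an assumed example value, not from the study:

```python
PIXELS_PER_MM = 194.98  # Topcon fundus camera, 50-degree lens (from the study)

def pixels_to_mm(distance_px):
    """Convert a distance measured in image pixels to millimetres."""
    return distance_px / PIXELS_PER_MM

# A macula-disc centre distance of 924 pixels is about 4.74 mm,
# matching the mean value reported above.
print(round(pixels_to_mm(924), 2))  # 4.74
```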

  13. Fundus imaging with a nasal endoscope

    Directory of Open Access Journals (Sweden)

    P Mahesh Shanmugam

    2015-01-01

    Full Text Available Wide field fundus imaging is needed to diagnose, treat, and follow up patients with retinal pathology. This is especially true for pediatric patients, in whom repeated evaluation is a challenge. The presently available imaging machines provide high-definition images but carry the obvious disadvantage of being either costly or bulky, and sometimes both, which limits their usage to large centers. We hereby report a technique of fundus imaging using a nasal endoscope coupled with viscoelastic. A regular nasal endoscope with viscoelastic coupling was placed on the cornea to image the fundus of infants under general anesthesia. Wide-angle fundus images of various fundus pathologies in infants could be obtained easily with readily available instruments and without much financial investment for the institution.

  14. Fundus Findings in Wernicke Encephalopathy

    Directory of Open Access Journals (Sweden)

    Tal Serlin

    2017-07-01

    Full Text Available Wernicke encephalopathy (WE) is an acute neuropsychiatric syndrome resulting from thiamine (vitamin B1) deficiency, classically characterized by the triad of ophthalmoplegia, confusion, and ataxia. While commonly associated with chronic alcoholism, WE may also occur in the setting of poor nutrition or absorption. We present a 37-year-old woman who underwent laparoscopic sleeve gastrectomy and presented with visual disturbance with bilateral horizontal nystagmus, confusion, and postural imbalance. Fundus examination revealed bilateral optic disc edema with a retinal hemorrhage in the left eye. Metabolic workup demonstrated thiamine deficiency. Her symptoms resolved after thiamine treatment. This case raises awareness of the possibility of posterior segment findings in WE, which are underreported.

  15. Investigating the influence of chromatic aberration and optical illumination bandwidth on fundus imaging in rats

    Science.gov (United States)

    Li, Hao; Liu, Wenzhong; Zhang, Hao F.

    2015-10-01

    Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating pathogenesis and therapeutic strategies. However, due to severe aberrations, retinal image quality in rodents can be much worse than in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that rat retinal image quality decreased with increasing illumination bandwidth. We achieved a retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus cameras for rodents. Our method is also beneficial to dispersion compensation in multiwavelength retinal imaging applications.

  16. Laparoscopic retrograde (fundus first) cholecystectomy

    Directory of Open Access Journals (Sweden)

    Kelly Michael D

    2009-12-01

    Full Text Available Abstract Background Retrograde ("fundus first") dissection is frequently used in open cholecystectomy and although feasible in laparoscopic cholecystectomy (LC) it has not been widely practiced. LC is most simply carried out using antegrade dissection with a grasper to provide cephalad fundic traction. A series is presented to investigate the place of retrograde dissection in the hands of an experienced laparoscopic surgeon using modern instrumentation. Methods A prospective record of all LCs carried out by an experienced laparoscopic surgeon following his appointment in Bristol in 2004 was examined. Retrograde dissection was resorted to when difficulties were encountered with exposure and/or dissection of Calot's triangle. Results 1041 LCs were carried out including 148 (14%) emergency operations and 131 (13%) associated bile duct explorations. There were no bile duct injuries although conversion to open operation was required in six patients (0.6%). Retrograde LC was attempted successfully in 11 patients (1.1%). The age ranged from 28 to 80 years (mean 61) and there were 7 males. Indications were: fibrous, contracted gallbladder 7, Mirizzi syndrome 2 and severe kyphosis 2. Operative photographs are included to show the type of case where it was needed and the technique used. Postoperative stay was 1/2 to 5 days (mean 2.2) with no delayed sequelae on follow-up. Histopathology showed: chronic cholecystitis 7, xanthogranulomatous cholecystitis 3 and acute necrotising cholecystitis 1. Conclusions In this series, retrograde laparoscopic dissection was necessary in 1.1% of LCs and a liver retractor was needed in 9 of the 11 cases. This technique does have a place and should be in the armamentarium of the laparoscopic surgeon.

  17. Fundus reflectance : historical and present ideas

    NARCIS (Netherlands)

    Berendschot, T.T.J.M.; Delint, P.J.; Norren, D. van

    2003-01-01

    In 1851 Helmholtz introduced the ophthalmoscope. The instrument allowed the observation of light reflected at the fundus. The development of this device was one of the major advancements in ophthalmology. Yet ophthalmoscopy allows only qualitative observation of the eye. Since 1950 attempts were made...

  18. Fundus autofluorescence applications in retinal imaging

    Directory of Open Access Journals (Sweden)

    Andrea Gabai

    2015-01-01

    Full Text Available Fundus autofluorescence (FAF) is a relatively new imaging technique that can be used to study retinal diseases. It provides information on retinal metabolism and health. Several different pathologies can be detected. Peculiar FAF alterations can help the clinician to monitor disease progression and to better understand its pathogenesis. In the present article, we review FAF principles and clinical applications.

  19. Fundus changes in central retinal vein occlusion.

    Science.gov (United States)

    Hayreh, Sohan Singh; Zimmerman, M Bridget

    2015-01-01

    To investigate systematically the retinal and optic disk changes in central retinal vein occlusion (CRVO) and their natural history. This study comprised 562 consecutive patients with CRVO (492 nonischemic [NI-CRVO] and 89 ischemic CRVO [I-CRVO] eyes) seen within 3 months of onset. Ophthalmic evaluation at initial and follow-up visits included recording visual acuity, visual fields, and detailed anterior segment and fundus examinations and fluorescein fundus angiography. Retinal and subinternal limiting membrane hemorrhages and optic disk edema in I-CRVO were initially more marked (P retinal epithelial pigment degeneration, serous macular detachment, and retinal perivenous sheathing developed at a higher rate in I-CRVO than that in NI-CRVO (P retinal venous engorgement than NI-CRVO (P = 0.003). Fluorescein fundus angiography showed significantly more fluorescein leakage, retinal capillary dilatation, capillary obliteration, and broken capillary foveal arcade (P < 0.0001) in I-CRVO than NI-CRVO. Resolution time of CRVO was longer for I-CRVO than NI-CRVO (P < 0.0001). Characteristics and natural history of fundus findings in the two types of CRVO are different.

  20. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Chaum, Edward [ORNL; Karnowski, Thomas Paul [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Abramoff, M.D. [University of Iowa

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features, independent of field of view or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter than previous techniques.
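One way to build resolution-independent vessel-morphology features in this spirit is to pool a binary vessel map over a coarse grid of cells; a simplified sketch (the actual metric fits an elliptical local-density model, which is not reproduced here):

```python
import numpy as np

def local_vessel_density(vessel_mask, grid=(4, 4)):
    """Fraction of vessel pixels in each cell of a coarse grid;
    features independent of image resolution."""
    h, w = vessel_mask.shape
    gh, gw = grid
    return np.array([
        vessel_mask[i * h // gh:(i + 1) * h // gh,
                    j * w // gw:(j + 1) * w // gw].mean()
        for i in range(gh) for j in range(gw)
    ])

vessels = np.zeros((100, 100), dtype=bool)
vessels[48:52, :] = True   # one horizontal "vessel" across the image
feats = local_vessel_density(vessels)
print(feats.reshape(4, 4))  # only the two middle grid rows contain vessel pixels
```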

  1. Glaucoma Detection From Fundus Image Using Opencv

    Directory of Open Access Journals (Sweden)

    K. Narasimhan

    2012-12-01

    Full Text Available This study proposes a semi-automated method for glaucoma detection using the CDR and ISNT ratio of a fundus image. CDR (Cup to Disc Ratio) is the ratio of the area of the Optic Cup to the area of the Optic Disc. In a patient with glaucoma the Optic Cup enlarges while the Optic Disc remains the same size, so the CDR is higher for a glaucomatous fundus image than for a normal one. The ROI of the green plane is taken, K-Means clustering is applied recursively, and the Optic Disc and Optic Cup are segmented. Through elliptic fitting, the areas of the Optic Disc and Cup are determined and the CDR is calculated. ISNT is another parameter used for the diagnosis of glaucoma, determined as the ratio of the area of blood vessels on the Inferior-Superior side to that on the Nasal-Temporal side. Blood vessels shift to the nasal side in glaucoma patients, so the value is lower for a glaucomatous fundus image than for a normal one. A matched filter and local entropy thresholding are applied to extract the blood vessels. The code is programmed in C++ using OpenCV library functions. OpenCV (Open Source Computer Vision Library) is a library of programming functions developed by Intel. Core, highgui, imgproc and ml are the main OpenCV modules used. The optimized functions in OpenCV increase the speed of operation, making the system well suited for real-time mass screening. A batch of 50 retinal images (25 normal and 25 abnormal) obtained from the Aravind Eye Hospital is used to assess the performance of the proposed system.
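Once the cup and disc are segmented, the CDR step reduces to an area ratio of the two regions; a sketch with synthetic circular masks standing in for the K-Means/elliptic-fitting output (the 0.3 decision threshold is illustrative, not from the study):

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """CDR as the ratio of segmented cup area to disc area (in pixels)."""
    return cup_mask.sum() / disc_mask.sum()

# Synthetic circular masks standing in for the segmentation output.
yy, xx = np.mgrid[:200, :200]
dist2 = (yy - 100) ** 2 + (xx - 100) ** 2
disc_mask = dist2 <= 80 ** 2   # optic disc, radius 80 px
cup_mask = dist2 <= 40 ** 2    # optic cup, radius 40 px

cdr = cup_to_disc_ratio(cup_mask, disc_mask)
suspect = cdr > 0.3            # illustrative screening threshold
print(round(cdr, 2))           # close to 0.25 for these synthetic masks
```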

  2. GANGRENE OF THE FUNDUS OF STOMACH

    Directory of Open Access Journals (Sweden)

    Sribatsa Kumar

    2014-10-01

    Full Text Available The stomach is well known for its rich vascular network, which generally protects it from ischemia, so gangrene of the fundus of the stomach is a rare event. Its causes have been attributed to gastric volvulus, intrathoracic herniation of the stomach through the diaphragm, psychogenic polyphagia resulting in massive gastric dilation, ingestion of corrosive materials, embolization of atherosclerotic plaque, thrombosis of the major arterial supply, occlusion of gastric vessels by therapeutically injected foreign bodies, and necrotizing gastritis caused by organisms. We report a case of gangrene of the fundus of the stomach that appears to have been caused by intake of bhang (Cannabis sativa).

  3. Assessment of diabetic retinopathy using nonmydriatic ultra-widefield scanning laser ophthalmoscopy (Optomap) compared with ETDRS 7-field stereo photography.

    Science.gov (United States)

    Kernt, Marcus; Hadi, Indrawati; Pinter, Florian; Seidensticker, Florian; Hirneiss, Christoph; Haritoglou, Christos; Kampik, Anselm; Ulbig, Michael W; Neubauer, Aljoscha S

    2012-12-01

    To compare the diagnostic properties of a nonmydriatic 200° ultra-widefield scanning laser ophthalmoscope (SLO) versus mydriatic Early Treatment Diabetic Retinopathy Study (ETDRS) 7-field photography for diabetic retinopathy (DR) screening. A consecutive series of 212 eyes of 141 patients with different levels of DR was examined. Grading of DR and clinically significant macular edema (CSME) from mydriatic ETDRS 7-field stereo photography was compared with grading obtained by Optomap Panoramic 200 SLO images. All SLO scans were performed through an undilated pupil, and no additional clinical information was used for evaluation of all images by the two independent, masked, expert graders. Twenty-two eyes from ETDRS 7-field photography and 12 eyes from Optomap were not gradable by at least one grader because of poor image quality. A total of 144 eyes were analyzed regarding DR level and 155 eyes regarding CSME. For ETDRS 7-field photography, 22 eyes (18 for grader 2) had no or mild DR (ETDRS levels ≤ 20) and 117 eyes (111 for grader 2) had no CSME. Substantial agreement existed between Optomap and ETDRS 7-field photography for both DR and CSME grading, with κ = 0.79 for DR and 0.73 for CSME for grader 1, and κ = 0.77 (DR) and 0.77 (CSME) for grader 2. Determination of CSME and grading of DR level from Optomap Panoramic 200 nonmydriatic images show a positive correlation with mydriatic ETDRS 7-field stereo photography. Both techniques are of sufficient quality to assess DR and CSME. Optomap Panoramic 200 images cover a larger retinal area and therefore may offer additional diagnostic properties.
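The κ values above are Cohen's kappa, i.e. agreement between two gradings corrected for chance; a self-contained sketch on toy gradings (not the study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy gradings for ten eyes (0 = no CSME, 1 = CSME); not the study data.
a = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
b = [0, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # 0.8
```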

  4. Incorporating privileged genetic information for fundus image based glaucoma detection.

    Science.gov (United States)

    Duan, Lixin; Xu, Yanwu; Li, Wen; Chen, Lin; Wong, Damon Wing Kee; Wong, Tien Yin; Liu, Jiang

    2014-01-01

    Visual features extracted from retinal fundus images have been increasingly used for glaucoma detection, as those images are generally easy to acquire. In recent years, genetic researchers have found that some single nucleotide polymorphisms (SNPs) play important roles in the manifestation of glaucoma and also show superiority over fundus images for glaucoma detection. In this work, we propose to use the SNPs to form the so-called privileged information and deal with a practical problem where both fundus images and privileged genetic information exist for the training subjects, while the test subjects have only fundus images. To solve this problem, we present an effective approach based on the learning using privileged information (LUPI) paradigm to train a predictive model for the image visual features. Extensive experiments demonstrate the usefulness of our approach in incorporating genetic information for fundus image based glaucoma detection.

  5. Retinal oximetry with a multiaperture camera

    Science.gov (United States)

    Lemaillet, Paul; Lompado, Art; Ibrahim, Mohamed; Nguyen, Quan Dong; Ramella-Roman, Jessica C.

    2010-02-01

    Oxygen saturation measurement in the retina is essential in monitoring the eye health of diabetic patients. In this paper, preliminary results of oxygen saturation measurements for a healthy patient's retina are presented. The retinal oximeter used is based on a regular fundus camera to which was added an optimized optical train designed to perform aperture division, while a filter array helps select the requested wavelengths. Hence, nine equivalent wavelength-dependent sub-images are taken in a snapshot, which helps minimize the effects of eye movements. The setup is calibrated using a set of reflectance calibration phantoms, and a lookup table (LUT) is computed. An inverse model based on the LUT is presented to extract the optical properties of a patient's fundus and further estimate the oxygen saturation in a retinal vessel.

  6. Multispectral imaging of the ocular fundus using light emitting diode illumination.

    Science.gov (United States)

    Everdell, N L; Styles, I B; Calcagni, A; Gibson, J; Hebden, J; Claridge, E

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.

  7. Fundus albipunctatus associated with compound heterozygous mutations in RPE65

    DEFF Research Database (Denmark)

    Schatz, Patrik; Preising, Markus; Lorenz, Birgit

    2011-01-01

    To describe a family with an 18-year-old woman with fundus albipunctatus and compound heterozygous mutations in RPE65 whose unaffected parents and 1 female sibling harbored single heterozygous RPE65 mutations.

  8. Learning deep similarity in fundus photography

    Science.gov (United States)

    Chudzik, Piotr; Al-Diri, Bashir; Caliva, Francesco; Ometto, Giovanni; Hunter, Andrew

    2017-02-01

    Similarity learning is one of the most fundamental tasks in image analysis. The ability to extract similar images in the medical domain as part of content-based image retrieval (CBIR) systems has been researched for many years. The vast majority of methods used in CBIR systems are based on hand-crafted feature descriptors. The approximation of a similarity mapping for medical images is difficult due to the large variety of pixel-level structures of interest. In fundus photography (FP) analysis, a subtle difference in, e.g., lesion and vessel shape and size can result in a different diagnosis. In this work, we demonstrated how to learn a similarity function for image patches derived directly from FP image data without the need for manually designed feature descriptors. We used a convolutional neural network (CNN) with a novel architecture adapted for similarity learning to accomplish this task. Furthermore, we explored and studied multiple CNN architectures. We show that our method can approximate the similarity between FP patches more efficiently and accurately than state-of-the-art feature descriptors, including SIFT and SURF, using a publicly available dataset. Finally, we observe that our approach, which is purely data-driven, learns that features such as vessel calibre and orientation are important discriminative factors, which resembles the way humans reason about similarity. To the best of the authors' knowledge, this is the first attempt to approximate a visual similarity mapping in FP.

  9. Tower Camera

    Data.gov (United States)

    Oak Ridge National Laboratory — The tower camera in Barrow provides hourly images of ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for...

  10. Cardiac cameras.

    Science.gov (United States)

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and development of powerful computers to analyze, display, and quantify data has been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, more patient comfort and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, ie, hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  11. Diagnostic fundus autofluorescence patterns in achromatopsia.

    Science.gov (United States)

    Fahim, Abigail T; Khan, Naheed W; Zahid, Sarwar; Schachar, Ira H; Branham, Kari; Kohl, Susanne; Wissinger, Bernd; Elner, Victor M; Heckenlively, John R; Jayasundera, Thiran

    2013-12-01

    To describe the unique diagnostic fundus autofluorescence (FAF) patterns in patients with achromatopsia and the associated findings on optical coherence tomography (OCT). Observational case series. We evaluated 10 patients with achromatopsia by means of best-corrected visual acuity (BCVA), ophthalmoscopy, Goldmann visual field, full-field electroretinography (ffERG), OCT, and FAF photography. FAF patterns were compared with patient age and foveal changes on OCT. Patients fell into two dichotomous age groups at the time of evaluation: six patients ranged from 11 to 23 years of age, and three patients ranged from 52 to 63 years of age. All patients had severely reduced photopic ffERG responses, including those exhibiting preserved foveal structure on OCT. The younger patients had absent to mild foveal atrophy on OCT, and four of the six demonstrated foveal and parafoveal hyperfluorescence on FAF. In addition, a 7-month-old child with compound heterozygous mutations in CNGA3 demonstrated similar foveal hyperfluorescence. The older patients demonstrated advanced foveal atrophy and punched-out foveal hypofluorescence with discrete borders on FAF imaging corresponding to the area of outer retinal cavitation on OCT. Foveal hyperfluorescence is an early sign of achromatopsia that can aid in clinical diagnosis. In our cohort, patients with achromatopsia demonstrated age-dependent changes in FAF, which are likely to be progressive and to correlate with foveal atrophy and cavitation on OCT. This finding may be useful in charting the natural course of the disease and in defining a therapeutic window for treatment. Copyright © 2013. Published by Elsevier Inc.

  12. DR HAGIS-a fundus image database for the automatic extraction of retinal surface vessels from diabetic patients.

    Science.gov (United States)

    Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall

    2017-01-01

    A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras, which results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups, collectively defined as the Diabetic Retinopathy, Hypertension, Age-related macular degeneration, and Glaucoma Image Set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
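
    The segmentation accuracies quoted above are pixel-wise agreement between the automatic binary vessel map and the manual segmentation, i.e. (TP + TN) / total pixels. A minimal sketch of that metric, assuming binary maps:

```python
import numpy as np

def segmentation_accuracy(predicted, manual):
    """Pixel-wise accuracy of a binary vessel map against the manual
    segmentation: (TP + TN) / total pixels."""
    predicted = predicted.astype(bool)
    manual = manual.astype(bool)
    return float((predicted == manual).mean())

# Tiny example: the prediction misses one vessel pixel out of 8.
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
pred   = np.array([[0, 1, 0, 0],
                   [0, 1, 1, 0]])
print(segmentation_accuracy(pred, manual))  # 7 of 8 pixels agree: 0.875
```

    Note that because vessels occupy a small fraction of the fundus, accuracies above 95% can coexist with substantial vessel-pixel errors, which is why sensitivity/specificity are often reported alongside.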

  13. Multi-modal adaptive optics system including fundus photography and optical coherence tomography for the clinical setting.

    Science.gov (United States)

    Salas, Matthias; Drexler, Wolfgang; Levecq, Xavier; Lamory, Barbara; Ritter, Markus; Prager, Sonja; Hafner, Julia; Schmidt-Erfurth, Ursula; Pircher, Michael

    2016-05-01

    We present a new compact multi-modal imaging prototype that combines an adaptive optics (AO) fundus camera with AO-optical coherence tomography (AO-OCT) in a single instrument. The prototype acquires AO fundus images with a field of view of 4° × 4° at a frame rate of 10 fps. The exposure time of a single image is 10 ms; this short exposure time results in nearly motion-artifact-free high-resolution images of the retina. The AO-OCT mode acquires volumetric data of the retina at a 200 kHz A-scan rate with a transverse resolution of ~4 µm and an axial resolution of ~5 µm. OCT imaging covers a field of view of 2° × 2° located at the central part of the AO fundus image. Recording of an OCT volume takes 0.8 seconds. The performance of the new system is tested in healthy volunteers and patients with retinal diseases.

  14. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  15. Weighted ensemble based automatic detection of exudates in fundus photographs.

    Science.gov (United States)

    Prentasic, Pavle; Loncaric, Sven

    2014-01-01

    Diabetic retinopathy (DR) is a visual complication of diabetes, which has become one of the leading causes of preventable blindness in the world. Exudate detection is an important problem in automatic screening systems for diabetic retinopathy that use color fundus photographs. In this paper, we present a method for detection of exudates in color fundus photographs which combines several preprocessing and candidate extraction algorithms to increase exudate detection accuracy. The first stage of the method consists of an ensemble of several exudate candidate extraction algorithms. In the learning phase, simulated annealing is used to determine the weights for combining the results of the ensemble's candidate extraction algorithms. The second stage of the method uses machine learning-based classification to detect exudate regions. Experimental validation was performed using the DRiDB color fundus image set and demonstrated that the proposed method achieves higher accuracy than state-of-the-art methods.
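
    The first stage above can be sketched as follows: several candidate maps are combined with weights, and simulated annealing searches the weight vector that maximizes agreement with the ground truth. This is a toy sketch with synthetic probability maps standing in for real candidate-extraction algorithms and a deliberately simplified annealing schedule; none of it is the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ground truth and three candidate-extraction outputs (probability maps)
# of decreasing quality (increasing noise level).
truth = (rng.random((32, 32)) < 0.1).astype(float)
cands = [np.clip(truth + rng.normal(0, sigma, truth.shape), 0, 1)
         for sigma in (0.2, 0.4, 0.8)]

def score(w):
    """Pixel accuracy of the thresholded weighted combination."""
    fused = sum(wi * c for wi, c in zip(w, cands)) / (sum(w) + 1e-12)
    return float(((fused > 0.5) == (truth > 0.5)).mean())

# Simulated annealing over the weight vector.
w = np.ones(3)
best_w, best_s = w.copy(), score(w)
temp = 1.0
for step in range(300):
    cand_w = np.clip(w + rng.normal(0, 0.1, 3), 0, None)  # perturb weights
    s = score(cand_w)
    # Accept improvements always; accept worse moves with a temperature-
    # dependent probability, then cool down.
    if s > score(w) or rng.random() < np.exp((s - score(w)) / temp):
        w = cand_w
    if s > best_s:
        best_w, best_s = cand_w.copy(), s
    temp *= 0.98

print(best_s >= score(np.ones(3)))  # learned weights do at least as well
```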

  16. Image analysis of ocular fundus for retinopathy characterization

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, covering both non-emergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of images of non-macula-centered, nonuniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high true-positive rate for microaneurysms.
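
    Morphological enhancement of vessels, as mentioned above, is commonly done with a top-hat transform, which highlights thin dark structures against the brighter fundus background. A minimal numpy sketch with a flat square structuring element and a black top-hat (the authors' exact operators are not specified here):

```python
import numpy as np

def erode(img, k):
    """Grayscale erosion with a flat (2k+1)x(2k+1) structuring element."""
    h, w = img.shape
    p = np.pad(img.astype(float), k, mode="edge")
    out = np.full((h, w), np.inf)
    for dy in range(2 * k + 1):           # minimum over all shifts
        for dx in range(2 * k + 1):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def dilate(img, k):
    """Grayscale dilation, by duality with erosion."""
    return -erode(-np.asarray(img, dtype=float), k)

def black_tophat(img, k):
    """Closing minus image: responds to dark, thin structures (vessels)."""
    closing = erode(dilate(img, k), k)
    return closing - img

# Dark vertical "vessel" (value 0) on a bright background (value 1).
img = np.ones((9, 9))
img[:, 4] = 0.0
resp = black_tophat(img, 2)
print(resp[4, 4], resp[4, 0])  # strong response on the vessel, none off it
```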

  17. Fundus Findings in Dengue Fever: A Case Report.

    Science.gov (United States)

    Şahan, Berna; Tatlıpınar, Sinan; Marangoz, Deniz; Çiftçi, Ferda

    2015-10-01

    Dengue fever is a flavivirus infection transmitted through infected mosquitoes, and is endemic in Southeast Asia, Central and South America, the Pacific, Africa, and the Eastern Mediterranean region. A 41-year-old male patient had visual impairment after travelling to Thailand, one of the endemic areas. Cotton wool spots were observed on fundus examination. Fundus fluorescein angiography showed minimal vascular leakage from areas near the cotton wool spots and dot hemorrhages in the macula. Dengue fever should be considered in patients with visual complaints who have travelled to areas where dengue fever is endemic.

  18. Referral system for hard exudates in eye fundus.

    Science.gov (United States)

    Naqvi, Syed Ali Gohar; Zafar, Muhammad Faisal; Haq, Ihsan ul

    2015-09-01

    Hard exudates are among the most common anomalies/artifacts found in the eye fundus of patients suffering from diabetic retinopathy, and they are a major cause of loss of sight or blindness in people with the disease. Diagnosis of hard exudates requires considerable time and effort from an ophthalmologist, and ophthalmologists have become overloaded, so there is a need for an automated diagnostic/referral system. In this paper a referral system for hard exudates in eye-fundus images is presented. The proposed referral system combines several techniques: the Scale-Invariant Feature Transform (SIFT), k-means clustering, visual dictionaries, and a Support Vector Machine (SVM). The system was also tested with a back-propagation neural network as the classifier. To test the performance of the system, four fundus image databases were used. One publicly available image database was used to compare the performance of the system with existing systems; to test general performance when images are taken under different conditions and come from different sources, the three other fundus image databases were mixed. The evaluation was also performed with different sizes of the visual dictionaries. When using only one fundus image database, a maximum area under the curve (AUC) of 0.9702 (97.02%) was achieved, with an accuracy of 95.02%. For the mixed image databases, an AUC of 0.9349 (93.49%) was recorded, with an accuracy of 87.23%. The results were compared with existing systems and found to be better or comparable. Copyright © 2015 Elsevier Ltd. All rights reserved.
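
    The SIFT + k-means + visual-dictionary pipeline above is a bag-of-visual-words representation: local descriptors are clustered into a dictionary, and each image becomes a normalized word histogram that feeds the SVM. A toy sketch with synthetic descriptors standing in for real SIFT features (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Minimal k-means used to build the visual dictionary from descriptors."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bovw_histogram(desc, centers):
    """Quantize an image's descriptors against the dictionary and return
    a normalized word histogram (the feature vector fed to the SVM)."""
    d = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Synthetic 16-D "SIFT-like" descriptors standing in for real features.
train_desc = rng.normal(size=(200, 16))
dictionary = kmeans(train_desc, k=8)
image_desc = rng.normal(size=(40, 16))
h = bovw_histogram(image_desc, dictionary)
print(h.shape)  # one 8-bin histogram per image
```

    The paper's observation that performance varies with dictionary size corresponds to the choice of `k` here: too few words blur distinctions, too many fragment them.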

  19. FUNDUS CHANGES IN PREGNANCY INDUCED HYPERTENSION: A CLINICAL STUDY

    Directory of Open Access Journals (Sweden)

    Rama Bharathi

    2015-01-01

    Full Text Available PURPOSE: To estimate the prevalence of fundus changes in Pregnancy Induced Hypertension (PIH) and to correlate the findings with the levels of hypertension and the severity of the disease. METHODS: This was a hospital-based cross-sectional study conducted over a period of one year, from July 2012 to June 2013. 150 patients with diagnosed PIH, admitted to the wards at King George Hospital, Visakhapatnam, with a period of gestation of 36 weeks and above, were included in the study. Those with pre-existing hypertension, coexisting diabetes mellitus, severe anaemia, renal disease, or ocular diseases such as cataract or corneal opacities were excluded from the study. After taking consent and ocular history, the anterior segment was evaluated. Pupils were dilated with 0.5% tropicamide eye drops and fundus examination was done with a direct ophthalmoscope. Information such as age, para, and BP was noted from the case sheets. RESULTS: The total number of patients studied was 150. Mean age was 23.06 ± 3.03 years. 105 (70%) were primigravidae and 45 (30%) were multigravidae. Fundus findings were seen in 35 cases (23.33%): 26 (17.33%) had grade I changes, 1 (0.66%) had grade II changes, 6 (3.9%) had grade III changes, and 2 (1.3%) had serous retinal detachment (grade IV). The degree of retinopathy correlated with the severity of the disease and the levels of hypertension. CONCLUSION: The prevalence of fundus changes in PIH is 23.33%. Most fundus changes in PIH are underdiagnosed. Timely ophthalmoscopy is called for in all cases of PIH, as it affects the decision to induce delivery, thereby preventing other complications.

  20. Correlation between Optic Nerve Parameters Obtained Using 3D Nonmydriatic Retinal Camera and Optical Coherence Tomography: Interobserver Agreement on the Disc Damage Likelihood Scale

    Directory of Open Access Journals (Sweden)

    Jae Wook Han

    2014-01-01

    Full Text Available Purpose. To compare stereometric parameters obtained by three-dimensional (3D) optic disc photography and optical coherence tomography (OCT) and assess interobserver agreement on the disc damage likelihood scale (DDLS). Methods. This retrospective study included 190 eyes from 190 patients classified as normal, glaucoma suspect, or glaucomatous. Residents at different levels of training completed the DDLS for each patient before and after attending a training module. 3D optic disc photography and OCT were performed on each eye, and correlations between the DDLS and various parameters obtained by each device were calculated. Results. We found moderate agreement (weighted kappa value, 0.59 ± 0.03) between DDLS scores obtained by 3D optic disc photography and the glaucoma specialist. The weighted kappa values for agreement and interobserver concordance increased among residents after the training module. Interobserver concordance was the poorest at DDLS stages 5 and 6. The DDLS scored by the glaucoma specialist had the highest predictability value (0.941). Conclusions. The DDLS obtained by 3D optic disc photography is a useful diagnostic tool for glaucoma. A supervised teaching program increased trainee interobserver agreement on the DDLS. DDLS stages 5 and 6 showed the poorest interobserver agreement, suggesting that caution is required when recording these stages.

  1. Correlation between Optic Nerve Parameters Obtained Using 3D Nonmydriatic Retinal Camera and Optical Coherence Tomography: Interobserver Agreement on the Disc Damage Likelihood Scale.

    Science.gov (United States)

    Han, Jae Wook; Cho, Soon Young; Kang, Kui Dong

    2014-01-01

    Purpose. To compare stereometric parameters obtained by three-dimensional (3D) optic disc photography and optical coherence tomography (OCT) and assess interobserver agreement on the disc damage likelihood scale (DDLS). Methods. This retrospective study included 190 eyes from 190 patients classified as normal, glaucoma suspect, or glaucomatous. Residents at different levels of training completed the DDLS for each patient before and after attending a training module. 3D optic disc photography and OCT were performed on each eye, and correlations between the DDLS and various parameters obtained by each device were calculated. Results. We found moderate agreement (weighted kappa value, 0.59 ± 0.03) between DDLS scores obtained by 3D optic disc photography and the glaucoma specialist. The weighted kappa values for agreement and interobserver concordance increased among residents after the training module. Interobserver concordance was the poorest at DDLS stages 5 and 6. The DDLS scored by the glaucoma specialist had the highest predictability value (0.941). Conclusions. The DDLS obtained by 3D optic disc photography is a useful diagnostic tool for glaucoma. A supervised teaching program increased trainee interobserver agreement on the DDLS. DDLS stages 5 and 6 showed the poorest interobserver agreement, suggesting that caution is required when recording these stages.
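
    The weighted kappa agreement statistic reported above can be computed from two raters' ordinal gradings. A generic sketch using quadratic disagreement weights (a common choice for ordinal scales such as DDLS stages; the paper's exact weighting scheme is not stated here):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Cohen's quadratically weighted kappa for two raters' ordinal
    gradings; larger disagreements are penalized more heavily."""
    O = np.zeros((n_cat, n_cat))          # observed joint frequencies
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    i, j = np.indices((n_cat, n_cat))
    w = (i - j) ** 2 / (n_cat - 1) ** 2   # quadratic disagreement weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # expected under independence
    return float(1 - (w * O).sum() / (w * E).sum())

perfect = weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4)
partial = weighted_kappa([0, 1, 2, 3], [0, 1, 2, 2], 4)
print(perfect)  # perfect agreement gives exactly 1.0
```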

  2. The pathogenesis of the fundus peau d'orange and salmon spots.

    Science.gov (United States)

    Giuffrè, G

    1987-01-01

    The fundus of the eye of a patient with pseudoxanthoma elasticum showed angioid streaks, fundus peau d'orange, and salmon spots, the latter unusually located in the macula. Fluorescein angiography revealed, in the arterial phase, a reticular hyperfluorescence in the areas of fundus peau d'orange and salmon spots. In the venous phase the fluorescence of the fundus peau d'orange was even, while the salmon spots showed staining and hyperfluorescent borders. These findings support the hypothesis that fundus peau d'orange is due to degeneration of Bruch's membrane and that the salmon spots are dehiscences of this membrane.

  3. Cost-effective instrumentation for quantitative depth measurement of optic nerve head using stereo fundus image pair and image cross correlation techniques

    Science.gov (United States)

    de Carvalho, Luis Alberto V.; Carvalho, Valeria

    2014-02-01

    One of the main problems with glaucoma throughout the world is that there are typically no symptoms in the early stages. Many people who have the disease do not know they have it, and by the time one finds out, the disease is usually at an advanced stage. Most retinal cameras available on the market today use sophisticated optics and have several other features/capabilities (wide-angle optics, red-free and angiography filters, etc.) that make them expensive for general practice or for screening purposes. Therefore, it is important to develop instrumentation that is fast, effective, and economical, in order to reach the mass public in general eye-care centers. In this work, we have constructed the hardware and software of a cost-effective, non-mydriatic prototype device that allows fast capturing and plotting of high-resolution quantitative 3D images and videos of the optic nerve head and neighboring region (30° field of view). The main application of this device is glaucoma screening, although it may also be useful for the diagnosis of other pathologies related to the optic nerve.
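
    Depth measurement from a stereo fundus pair via cross-correlation amounts to finding, for each point, the horizontal shift (disparity) that maximizes the normalized cross-correlation between the two views; disparity then maps to depth through the stereo geometry. A toy sketch with a synthetically shifted image, not the device's actual implementation:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def disparity(left, right, x, y, win=2, search=6):
    """Horizontal shift maximizing NCC around (x, y); larger disparity
    corresponds to a nearer (more elevated) point on the disc surface."""
    patch = left[y - win:y + win + 1, x - win:x + win + 1]
    scores = []
    for d in range(search + 1):
        cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
        scores.append(ncc(patch, cand))
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
left = rng.random((20, 30))                  # textured synthetic "view"
true_d = 4
right = np.roll(left, -true_d, axis=1)       # right view shifted by 4 pixels
print(disparity(left, right, x=15, y=10))    # recovers the 4-pixel shift
```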

  4. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of diabetic retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology have been successfully classified by an MLP neural network (correct classification rate of 84%).

  5. Case Report of Bullous Pemphigoid following Fundus Fluorescein Angiography

    Directory of Open Access Journals (Sweden)

    Goktug Demirci

    2010-05-01

    Full Text Available Purpose: To report the first case of bullous pemphigoid (BP) following intravenous fluorescein for fundus angiography. Clinical Features: A 70-year-old male patient was admitted to the intensive care unit with BP and sepsis. He reported a history of fundus fluorescein angiography, with a pre-diagnosis of senile macular degeneration, 2 months prior to presentation. At that time, fluorescein extravasated at the antecubital region. Following the procedure, pruritus and erythema began at the wrists bilaterally and quickly spread to the entire body. The patient also reported a history of allergy to human albumin solution (Plamasteril®; Abbott) 15 years before, during bypass surgery. On dermatologic examination, erythematous patches were present on the scalp, chest, and anogenital region. Vesicles and bullous lesions were present on the upper and lower extremities. On day 2 of hospitalization, tense bullae appeared on the upper and lower extremities. The patient was treated with oral methylprednisolone 48 mg (Prednol®; Mustafa Nevzat), topical clobetasol dipropionate 0.05% cream (Dermovate®; Glaxo SmithKline), and topical 4% urea lotion (Excipial Lipo®; Orva) for presumptive bullous pemphigoid. Skin punch biopsy provided tissue for histopathology, direct immunofluorescence examination, and salt extraction, which were all consistent with BP. After 1 month, the patient was transferred to the intensive care unit with sepsis secondary to urinary tract infection; he died 2 weeks later from sepsis and cardiac failure. Conclusions: To our knowledge, this is the first reported case of BP following fundus fluorescein angiography in a patient with known human albumin solution allergy. Consideration should be given to avoiding fluorescein angiography, changing the administration route, or premedicating with antihistamines in patients with known human albumin solution allergy. The association between fundus fluorescein angiography and BP should be further investigated.

  6. Fundus artery occlusion caused by cosmetic facial injections

    Institute of Scientific and Technical Information of China (English)

    Chen Yanyun; Wang Wenying; Li Jipeng; Yu Yajie; Li Lin; Lu Ning

    2014-01-01

    Background With the increasing popularity of cosmetic facial filler injections in recent years, more and more associated complications have been reported. However, the causative surgical procedures and preventative measures have not been well studied to date. The aim of this study was to investigate the clinical characteristics and visual prognosis of fundus artery occlusion resulting from cosmetic facial filler injections. Methods Thirteen consecutive patients with fundus artery occlusion caused by facial filler injections were included. Main outcome measures were filler materials, injection sites, best-corrected visual acuity (BCVA), fundus fluorescein angiography, and associated ocular and systemic manifestations. Results Eleven patients had ophthalmic artery occlusion (OAO), and one patient each had central retinal artery occlusion (CRAO) and anterior ischemic optic neuropathy (AION). Injected materials included autologous fat (seven cases), hyaluronic acid (five cases), and bone collagen (one case). Injection sites were the frontal area (five cases), periocular area (two cases), temple area (two cases), and nasal area (four cases). Injected autologous fat was associated with worse final BCVA than hyaluronic acid. The BCVA of the seven patients with autologous fat injection in the frontal and temple areas was no light perception. Most of the patients with OAO had ocular pain, headache, ptosis, ophthalmoplegia, and no improvement in final BCVA. Conclusions Cosmetic facial injections can cause fundus artery occlusion. Autologous fat injection tends to be associated with painful blindness, ptosis, ophthalmoplegia, and poor visual outcomes. The prognosis is much worse with autologous fat injection than with hyaluronic acid injection.

  7. [Therapy of fundus oculi vascular pathology by solcoseryl].

    Science.gov (United States)

    Eliseeva, E G; Vorob'eva, O K; Astaf'eva, N V

    1999-01-01

    Long-term (more than 17 years) therapy of 2331 patients (3122 eyes) with vascular conditions of the fundus oculi with the retinotropic drug solcoseryl showed its high efficacy both as monotherapy and in combination with other traditional and symptomatic treatments. Solcoseryl improved visual function and the hemodynamics of retinal vessels, promoted more stable and longer-lasting stabilization of the treatment results, and accelerated the rehabilitation of patients.

  8. Interactive segmentation for geographic atrophy in retinal fundus images

    OpenAIRE

    Lee, Noah; SMITH, R. THEODORE; Laine, Andrew F.

    2008-01-01

    Fundus auto-fluorescence (FAF) imaging is a non-invasive technique for in vivo ophthalmoscopic inspection of age-related macular degeneration (AMD), the most common cause of blindness in developed countries. Geographic atrophy (GA) is an advanced form of AMD and accounts for 12–21% of severe visual loss in this disorder [3]. Automatic quantification of GA is important for determining disease progression and facilitating clinical diagnosis of AMD. The problem of automatic segmentation of patho...

  9. In vivo diffuse correlation spectroscopy investigation of the ocular fundus

    Science.gov (United States)

    Cattini, Stefano; Staurenghi, Giovanni; Gatti, Antonietta; Rovati, Luigi

    2013-05-01

    Diffuse correlation spectroscopy (DCS) measurements recorded in vivo from rabbits' ocular fundus are presented. Despite the complexity of these ocular tissues, we provide a clear and simple demonstration of the ability of DCS to analyze variations in physiological quantities of clinical interest. Indeed, the reported experimental activities demonstrate that DCS can reveal both choroidal-flow and temperature variations and detect nano- and micro-aggregates in ocular fundus circulation. Such abilities can be of great interest both in fundamental research and in practical clinical applications. The proposed measuring system can be useful in: (a) monitoring choroidal blood flow variations, (b) determining the end-point for photodynamic therapy and transpupillary thermotherapy, and (c) managing the dye injection and determining an end-point for dye-enhanced photothrombosis. Moreover, it could allow diagnosis when the presence of nano- and micro-aggregates is related to specific diseases, as well as verification of the effects of nanoparticle injection in nanomedicine. Even though the reported results demonstrate the applicability of DCS to investigating the ocular fundus, a detailed and accurate investigation of the limits of detection is beyond the scope of this article.
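
    A DCS correlator computes the normalized intensity autocorrelation g2(τ) = ⟨I(t)I(t+τ)⟩ / ⟨I⟩²; faster scatterer motion (e.g. higher choroidal flow) makes g2 decay faster. A toy sketch on synthetic speckle traces; the trace model here is illustrative and not the authors' analysis:

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2,
    the quantity a DCS correlator estimates from the detected intensity."""
    I = np.asarray(intensity, dtype=float)
    mean_sq = I.mean() ** 2
    return np.array([(I[:len(I) - lag] * I[lag:]).mean() / mean_sq
                     for lag in range(max_lag + 1)])

rng = np.random.default_rng(7)

def speckle(decay):
    """Synthetic speckle trace: low-pass-filtered noise whose correlation
    time shortens as 'decay' grows, mimicking faster flow."""
    x = rng.normal(size=5000)
    y = np.empty_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = (1 - decay) * y[t - 1] + decay * x[t]
    return (y - y.min() + 0.1) ** 2   # non-negative "intensity"

slow = g2(speckle(0.02), 50)   # slow flow: correlation persists
fast = g2(speckle(0.30), 50)   # fast flow: correlation decays quickly
print(slow[10] > fast[10])     # slow trace stays correlated longer
```

    In a real instrument g2 is fit to a transport model to extract a blood-flow index; the sketch only shows the decay behavior that the fit exploits.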

  10. Effect of Resected Gastric Fundus Fat on Ghrelin Tissue Levels: A Prospective Study.

    Science.gov (United States)

    Durmuş, Ali; Durmuş, Ilgim; Abahuni, Melis; Karatepe, Oguzhan

    2017-01-01

    Introduction: Obesity is currently an important health problem that is rapidly increasing worldwide. In recent years, the number of obesity-related surgeries has increased; the most common type is laparoscopic sleeve gastrectomy (LSG). The aim of this study was to compare the genetic expression of the hormone ghrelin in different parts of the stomach. Materials and Methods: Nineteen obese patients who underwent LSG were examined in this study. Fat tissue from two different parts of the stomach, the fundus and the upper part of the fundus, was analysed by enzyme-linked immunosorbent assay (ELISA). Ribonucleic acid (RNA) isolation, complementary DNA (cDNA), and real-time quantitative polymerase chain reaction (RQ-PCR) techniques were applied. Additionally, a human ghrelin ELISA kit was used to measure ghrelin in obese patients. The ghrelin levels of fat tissue from the fundus and from the upper part of the fundus were statistically compared. Results: In all 19 patients, the average ghrelin level in the fundus was greater than 30. The average ghrelin level of the fat pad located in the upper part of the fundus was greater than 30 for four patients; the average level was approximately 5 in the remaining patients. A statistically significant difference in the ghrelin level was found between the fundus and the fundus fat tissue. Collection of fundus fat tissue is not routinely performed during LSG. However, ghrelin hormone elevation in this tissue may require collection of fundus tissue during surgery.

  11. Simple, Inexpensive Technique for High-Quality Smartphone Fundus Photography in Human and Animal Eyes

    OpenAIRE

    Haddock, Luis J.; Kim, David Y.; Shizuo Mukai

    2013-01-01

    Purpose. We describe in detail a relatively simple technique of fundus photography in human and rabbit eyes using a smartphone, an inexpensive app for the smartphone, and instruments that are readily available in an ophthalmic practice. Methods. Fundus images were captured with a smartphone and a 20D lens with or without a Koeppe lens. By using the coaxial light source of the phone, this system works as an indirect ophthalmoscope that creates a digital image of the fundus. The application wh...

  12. Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing

    OpenAIRE

    Peishan Dai; Hanwei Sheng; Jianmei Zhang; Ling Li; Jing Wu; Min Fan

    2016-01-01

    Retinal fundus images play an important role in the diagnosis of retina-related diseases. Detailed information in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may be in low contrast, and retinal image enhancement usually helps in analyzing diseases related to the retinal fundus image. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new ...

  13. [Top ten progressions of clinical research in fundus diseases in China].

    Science.gov (United States)

    2014-11-01

    Ten research items from the past five years representing the progress of clinical research in fundus diseases in China were selected by vote by specialists from the Ocular Fundus Disease Group of the Ophthalmology Society of the Chinese Medical Association. Choroidal neovascular disease, pediatric retinal disease, polypoidal choroidal vasculopathy, intraocular malignant tumors, and intraocular infections caused by specific pathogens are covered. Novel treatments such as anti-VEGF medication, PDT, minimally invasive vitrectomy, and intraocular injection; the establishment of the Clinical Research Center of New Drug Development; and epidemiologic studies of fundus diseases are also included. These landmark research advances reflect the strength and influence of Chinese fundus disease scholars worldwide.

  14. Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing.

    Science.gov (United States)

    Dai, Peishan; Sheng, Hanwei; Zhang, Jianmei; Li, Ling; Wu, Jing; Fan, Min

    2016-01-01

    Retinal fundus images play an important role in the diagnosis of retina-related diseases. Detailed information in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may be in low contrast, and retinal image enhancement usually helps in analyzing diseases related to the retinal fundus image. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image was processed by the normalized convolution algorithm with a domain transform to obtain an image with the basic information of the background. Then, the image with the basic information of the background was fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image was denoised by a two-stage denoising method including fourth-order PDEs and the relaxed median filter. Retinal image databases, including the DRIVE, STARE, and DIARETDB1 databases, were used to evaluate the image enhancement effects. The results show that the method can enhance the retinal fundus image prominently. Moreover, unlike some other fundus image enhancement methods, the proposed method can directly enhance color images.
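
    The first step described above, normalized convolution, can be sketched as convolving both the certainty-masked image and the certainty mask with the same kernel and then dividing, so that missing or unreliable pixels do not bias the background estimate. A minimal sketch with a box kernel (the paper's domain transform is omitted, and the fusion rule is only indicated):

```python
import numpy as np

def normalized_convolution(img, mask, radius=1):
    """Certainty-weighted box convolution: convolve the masked image and
    the mask separately, then divide, so uncertain pixels carry no weight."""
    h, w = img.shape
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    pi = np.pad(img * mask, radius, mode="edge")
    pm = np.pad(mask.astype(float), radius, mode="edge")
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            num += pi[dy:dy + h, dx:dx + w]
            den += pm[dy:dy + h, dx:dx + w]
    return num / np.maximum(den, 1e-12)

img = np.full((5, 5), 0.5)
img[2, 2] = np.nan                        # a missing / unreliable pixel
mask = ~np.isnan(img)                     # certainty map
background = normalized_convolution(np.nan_to_num(img), mask)
print(background[2, 2])                   # recovered from its neighbours: 0.5
```

    An enhancement step in the spirit of the paper could then fuse this background with the original, e.g. `img + alpha * (img - background)`, though the authors' exact fusion rule is not reproduced here.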

  15. Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing

    Directory of Open Access Journals (Sweden)

    Peishan Dai

    2016-01-01

    Full Text Available Retinal fundus images play an important role in the diagnosis of retina-related diseases. Detailed information in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may be in low contrast, and retinal image enhancement usually helps in analyzing diseases related to the retinal fundus image. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image was processed by the normalized convolution algorithm with a domain transform to obtain an image with the basic information of the background. Then, the image with the basic information of the background was fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image was denoised by a two-stage denoising method including fourth-order PDEs and the relaxed median filter. Retinal image databases, including the DRIVE, STARE, and DIARETDB1 databases, were used to evaluate the image enhancement effects. The results show that the method can enhance the retinal fundus image prominently. Moreover, unlike some other fundus image enhancement methods, the proposed method can directly enhance color images.

  16. Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing

    Science.gov (United States)

    2016-01-01

    Retinal fundus images play an important role in the diagnosis of retinal diseases. Fine details such as small vessels, microaneurysms, and exudates may appear in low contrast, so image enhancement is often needed before retinal diseases can be analyzed. Current enhancement methods can introduce artificial boundaries, abrupt changes in color levels, and loss of image detail. To avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image is processed by a normalized convolution algorithm with a domain transform, yielding an image that carries the basic background information. This background image is then fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image is denoised in two stages, using fourth-order partial differential equations (PDEs) followed by a relaxed median filter. Enhancement quality was evaluated on the DRIVE, STARE, and DIARETDB1 retinal image databases. The results show that the method enhances retinal fundus images markedly and, unlike some other fundus image enhancement methods, can be applied directly to color images. PMID:27688745

  17. A Web-based telemedicine system for diabetic retinopathy screening using digital fundus photography.

    Science.gov (United States)

    Wei, Jack C; Valentino, Daniel J; Bell, Douglas S; Baker, Richard S

    2006-02-01

    The purpose was to design and implement a Web-based telemedicine system for diabetic retinopathy screening using digital fundus cameras, and to make the software publicly available through an Open Source release. The process of retinal imaging and case reviewing was modeled to optimize workflow and guide the design of the computer system. The Web-based system was built on Java Servlet and JavaServer Pages (JSP) technologies. Apache Tomcat was chosen as the JSP engine, MySQL as the main database, and the Laboratory of Neuro Imaging (LONI) Image Storage Architecture from LONI at UCLA as the platform for image storage. For security, all data transmissions were carried over encrypted Internet connections such as Secure Socket Layer (SSL) and HyperText Transfer Protocol over SSL (HTTPS). User logins were required, and access to patient data was logged for auditing. The system was deployed at the Hubert H. Humphrey Comprehensive Health Center and Martin Luther King/Drew Medical Center of the Los Angeles County Department of Health Services. Within 4 months, 1500 images of more than 650 patients were taken at Humphrey's Eye Clinic and successfully transferred to King/Drew's Department of Ophthalmology. This study demonstrates an effective architecture for remote diabetic retinopathy screening.

  18. Characteristics of fundus autofluorescence in cystoid macular edema

    Institute of Scientific and Technical Information of China (English)

    PENG Xi-jia; SU Lan-ping

    2011-01-01

    Background: Fundus autofluorescence (FAF) imaging is a fast and noninvasive technique developed over the last decade. The authors utilized the fluorescent properties of lipofuscin to study the health and viability of the retinal pigment epithelium (RPE)-photoreceptor complex. Observing the intensity and distribution of FAF in various retinal diseases helps establish the diagnosis and evaluate prognosis. In this study, we describe the FAF characteristics of cystoid macular edema (CME). Methods: Sixty-two patients (70 eyes) with CME underwent FAF and fundus fluorescein angiography (FFA) with a confocal scanning laser ophthalmoscope (Heidelberg Retina Angiograph 2, HRA2). Characteristics of the FAF images were compared with the FFA images. Results: FAF intensity in normal subjects was highest at the posterior pole and dipped at the fovea. In all cases of CME, fluorescein dye accumulated in honeycomb-like spaces in the macula and formed a typical or atypical petaloid pattern in the late phases of the angiography. Sixty-one eyes with CME showed a mildly or moderately hyperautofluorescent petaloid pattern in the fovea on FAF images that corresponded closely with the pattern in their FFA images; nine eyes with CME secondary to exudative age-related macular degeneration (AMD) showed an expanded area of hypoautofluorescence without a petaloid pattern in the macula. Conclusion: FAF imaging can serve as a rapid, noninvasive ancillary technique in the diagnosis of most cases of CME, with the exception of AMD and a small number of other fundus diseases.

  19. Measurement of retinal blood flow in the rat by combining Doppler Fourier-domain optical coherence tomography with fundus imaging

    Science.gov (United States)

    Werkmeister, René M.; Vietauer, Martin; Knopf, Corinna; Fürnsinn, Clemens; Leitgeb, Rainer A.; Reitsamer, Herbert; Gröschl, Martin; Garhöfer, Gerhard; Vilser, Walthard; Schmetterer, Leopold

    2014-10-01

    A wide variety of ocular diseases are associated with abnormalities in ocular circulation. As such, there is considerable interest in techniques for quantifying retinal blood flow, among which Doppler optical coherence tomography (OCT) may be the most promising. We present an approach to measuring retinal blood flow in the rat using a new optical system that combines the measurement of blood flow velocities via Doppler Fourier-domain OCT with the measurement of vessel diameters using a fundus camera-based technique. Relying on fundus images rather than OCT images for extraction of retinal vessel diameters improves the reliability of the technique. The system was operated with an 841-nm superluminescent diode and a charge-coupled device camera that could be run at a line rate of 20 kHz. We show that the system is capable of quantifying the effect of 100% oxygen breathing on retinal blood flow. In six rats, we observed a decrease in retinal vessel diameters of 13.2% and a decrease in retinal blood velocity of 42.6%, leading to a decrease in retinal blood flow of 56.7%. Furthermore, in four rats, the response of retinal blood flow during stimulation with diffuse flicker light was assessed. Retinal vessel diameter and blood velocity increased by 3.4% and 28.1%, respectively, leading to a relative increase in blood flow of 36.2%. The presented technique shows much promise for quantifying early changes in retinal blood flow during provocation with various stimuli in rodent models of ocular disease.
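The reported percentage changes are mutually consistent under the usual assumption that volumetric flow scales as velocity times diameter squared (cross-sectional area is proportional to diameter squared); a quick arithmetic check:

```python
# Sanity check of the reported changes, assuming flow ~ velocity * diameter**2.
def flow_change(diameter_change, velocity_change):
    """Relative flow change from relative diameter and velocity changes."""
    return (1 + diameter_change) ** 2 * (1 + velocity_change) - 1

oxygen = flow_change(-0.132, -0.426)   # 100% oxygen breathing
flicker = flow_change(+0.034, +0.281)  # diffuse flicker stimulation
# oxygen  is about -0.568, matching the reported  -56.7% flow decrease
# flicker is about +0.370, close to the reported  +36.2% (input rounding)
```

The small gap in the flicker case is what one expects from the diameter and velocity percentages themselves being rounded to one decimal place.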

  20. Glaucoma detection based on local binary patterns in fundus photographs

    Science.gov (United States)

    Alsheh Ali, Maya; Hurtut, Thomas; Faucon, Timothée.; Cheriet, Farida

    2014-03-01

    Glaucoma, a group of diseases that lead to optic neuropathy, is one of the most common causes of blindness worldwide. Glaucoma rarely causes symptoms until the later stages of the disease, and early detection is very important to prevent visual loss since optic nerve damage cannot be reversed. To detect glaucoma, purely data-driven techniques have advantages, especially when the disease characteristics are complex and precise image-based measurements are difficult to obtain. In this paper, we present our preliminary study on glaucoma detection using an automatic method based on local texture features extracted from fundus photographs. It implements the completed modeling of Local Binary Patterns to capture representative texture features from the whole image. A local region is represented by three operators: its central pixel (LBPC) and its local differences as two complementary components, the sign (the classical LBP) and the magnitude (LBPM). An image texture is finally described by both the distribution of LBP and the joint distribution of LBPM and LBPC. Images are then classified using a nearest-neighbor method with a leave-one-out validation strategy. On a sample set of 41 fundus images (13 glaucomatous, 28 non-glaucomatous), our method achieves a 95.1% success rate with a specificity of 92.3% and a sensitivity of 96.4%. This study proposes a reproducible glaucoma detection process that could be used in low-cost medical screening, thus avoiding the inter-expert variability issue.
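As a concrete illustration, the classical LBP (the sign component of the completed model the abstract describes) thresholds each pixel's eight neighbours against the centre pixel, packs the results into an 8-bit code, and summarises the image by the histogram of codes. A minimal NumPy sketch:

```python
import numpy as np

def lbp_histogram(img):
    """Classical 8-neighbour LBP (the 'sign' component): threshold each
    neighbour against the centre pixel, pack the 8 bits into a code, and
    summarise the image by the normalized 256-bin code histogram."""
    c = img[1:-1, 1:-1]          # interior pixels (each has 8 neighbours)
    h, w = c.shape
    # offsets of the 8 neighbours, clockwise from the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros((h, w), dtype=int)
    for bit, (di, dj) in enumerate(offsets):
        code += (img[di:di + h, dj:dj + w] >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()

img = np.random.default_rng(1).random((32, 32))
h = lbp_histogram(img)
```

In the paper this histogram (jointly with the LBPM/LBPC distributions) is the feature vector fed to the nearest-neighbour classifier.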

  1. The grey fovea sign of macular oedema or subfoveal fluid on non-stereoscopic fundus photographs

    DEFF Research Database (Denmark)

    Hasler, Pascal W; Soliman, Wael; Sander, Birgit

    2017-01-01

    PURPOSE: To describe the grey fovea sign of fovea-involving macular oedema or subretinal fluid accumulation in red-free fundus photography. METHODS: A test set of 91 digital fundus photographs of good quality from 100 consecutive eyes in 72 patients with diabetic retinopathy or central serous cho...

  2. Simple, inexpensive technique for high-quality smartphone fundus photography in human and animal eyes.

    Science.gov (United States)

    Haddock, Luis J; Kim, David Y; Mukai, Shizuo

    2013-01-01

    Purpose. We describe in detail a relatively simple technique of fundus photography in human and rabbit eyes using a smartphone, an inexpensive app for the smartphone, and instruments that are readily available in an ophthalmic practice. Methods. Fundus images were captured with a smartphone and a 20D lens with or without a Koeppe lens. By using the coaxial light source of the phone, this system works as an indirect ophthalmoscope that creates a digital image of the fundus. We used an app that allows independent control of focus, exposure, and light intensity during video filming. With this app, we recorded high-definition videos of the fundus and subsequently extracted high-quality still images from the video clips. Results. The described technique of smartphone fundus photography was able to capture excellent high-quality fundus images both in children under anesthesia and in awake adults. Excellent images were acquired with the 20D lens alone in the clinic, and the addition of the Koeppe lens in the operating room resulted in the best-quality images. Successful photodocumentation of the rabbit fundus was achieved in control and experimental eyes. Conclusion. The described system was able to take consistently high-quality fundus photographs in patients and in animals using readily available, portable instruments with simple power sources. It is relatively simple to master, is relatively inexpensive, and can take advantage of the expanding mobile-telephone networks for telemedicine.

  3. Differential diagnosis in cases of saddle-like impression of the fundus of the stomach

    Energy Technology Data Exchange (ETDEWEB)

    Hohenberg, G.; Deimer, E.; Schmidmeier, L.

    1984-01-01

    X-ray examination of the stomach sometimes shows a saddle-like impression of the region of the fundus. This phenomenon is without any clinical importance, but there are many diseases such as hiatal hernia, benign and malignant tumours, inflammatory diseases, and varices which are localised at the fundus. Differential diagnostic problems are discussed.

  4. Simple, Inexpensive Technique for High-Quality Smartphone Fundus Photography in Human and Animal Eyes

    Directory of Open Access Journals (Sweden)

    Luis J. Haddock

    2013-01-01

    Full Text Available Purpose. We describe in detail a relatively simple technique of fundus photography in human and rabbit eyes using a smartphone, an inexpensive app for the smartphone, and instruments that are readily available in an ophthalmic practice. Methods. Fundus images were captured with a smartphone and a 20D lens with or without a Koeppe lens. By using the coaxial light source of the phone, this system works as an indirect ophthalmoscope that creates a digital image of the fundus. We used an app that allows independent control of focus, exposure, and light intensity during video filming. With this app, we recorded high-definition videos of the fundus and subsequently extracted high-quality still images from the video clips. Results. The described technique of smartphone fundus photography was able to capture excellent high-quality fundus images both in children under anesthesia and in awake adults. Excellent images were acquired with the 20D lens alone in the clinic, and the addition of the Koeppe lens in the operating room resulted in the best-quality images. Successful photodocumentation of the rabbit fundus was achieved in control and experimental eyes. Conclusion. The described system was able to take consistently high-quality fundus photographs in patients and in animals using readily available, portable instruments with simple power sources. It is relatively simple to master, is relatively inexpensive, and can take advantage of the expanding mobile-telephone networks for telemedicine.

  5. Comparison of Color Fundus Photography, Infrared Fundus Photography, and Optical Coherence Tomography in Detecting Retinal Hamartoma in Patients with Tuberous Sclerosis Complex

    Institute of Scientific and Technical Information of China (English)

    Da-Yong Bai; Xu Wang; Jun-Yang Zhao; Li Li; Jun Gao; Ning-Li Wang

    2016-01-01

    Background: A sensitive method is required to detect retinal hamartomas in patients with tuberous sclerosis complex (TSC). The aim of the present study was to compare color fundus photography, infrared fundus imaging (IFG), and optical coherence tomography (OCT) in the detection of retinal hamartoma in patients with TSC. Methods: This study included 11 patients (22 eyes) with TSC, who underwent color fundus photography, IFG, and spectral-domain OCT to detect retinal hamartomas. TSC1 and TSC2 mutations were tested in eight patients. Results: The mean age of the 11 patients was 8.0 ± 2.1 years. The mean spherical equivalent was -0.55 ± 1.42 D by autorefraction with cycloplegia. In the 11 patients (22 eyes), OCT, infrared fundus photography, and color fundus photography revealed 26, 18, and 9 hamartomas, respectively. The predominant hamartoma was type I (55.6%). All hamartomas detected by color fundus photography or IFG were also detected by OCT. Conclusion: Among color fundus photography, IFG, and OCT, OCT had the highest detection rate for retinal hamartoma in TSC patients; OCT is therefore promising for the clinical diagnosis of TSC.

  6. Interconnected network of cameras

    Science.gov (United States)

    Hosseini Kamal, Mahdad; Afshari, Hossein; Leblebici, Yusuf; Schmid, Alexandre; Vandergheynst, Pierre

    2013-02-01

    The real-time development of multi-camera systems is a great challenge, and the synchronization and large data rates of the cameras add to this complexity, which grows further as the number of cameras increases. The customary approach to implementing such systems is centralized: all raw streams from the cameras are first stored and then processed for the target application. An alternative is to build these systems from smart cameras instead of ordinary cameras with limited or no processing capability. Smart cameras with intra- and inter-camera processing capability, programmable at both the software and hardware levels, offer the right platform for developing distributed, parallel real-time applications for multi-camera systems. Inter-camera processing requires interconnecting the smart cameras in a network arrangement. A novel hardware emulation platform is introduced to demonstrate the concept of an interconnected network of cameras, a methodology is presented for constructing and analyzing the interconnection network, and a sample application is developed and demonstrated.

  7. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  9. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  10. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  11. Statistical Characterization and Segmentation of Drusen in Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Karnowski, Thomas Paul [ORNL; Aykac, Deniz [ORNL; Giancardo, Luca [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Nichols, Trent L [ORNL; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Age related Macular Degeneration (AMD) is a disease of the retina associated with aging. AMD progression in patients is characterized by drusen, pigmentation changes, and geographic atrophy, which can be seen using fundus imagery. The level of AMD is characterized by standard scaling methods, which can be somewhat subjective in practice. In this work we propose a statistical image processing approach to segment drusen with the ultimate goal of characterizing the AMD progression in a data set of longitudinal images. The method characterizes retinal structures with a statistical model of the colors in the retina image. When comparing the segmentation results of the method between longitudinal images with known AMD progression and those without, the method detects progression in our longitudinal data set with an area under the receiver operating characteristics curve of 0.99.
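The abstract does not spell out the statistical colour model, so the sketch below shows one common variant as an illustrative stand-in, not the authors' exact method: fit a Gaussian model to background retina colours and score each pixel by its Mahalanobis distance, with large distances flagging non-background structures such as drusen. The colour values here are synthetic toy data:

```python
import numpy as np

def mahalanobis_scores(pixels, background):
    """Distance of each pixel colour (rows of `pixels`, shape (n, 3)) from a
    Gaussian model fitted to `background` colours; large distances suggest
    non-background structures. (Illustrative variant only.)"""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False) + 1e-6 * np.eye(background.shape[1])
    inv = np.linalg.inv(cov)
    d = pixels - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, inv, d))

rng = np.random.default_rng(2)
bg = rng.normal([0.8, 0.4, 0.2], 0.02, size=(500, 3))       # reddish fundus background
drusen = rng.normal([0.95, 0.85, 0.5], 0.02, size=(20, 3))  # pale yellowish deposits
scores_bg = mahalanobis_scores(bg, bg)
scores_dr = mahalanobis_scores(drusen, bg)
```

Thresholding the score map then yields a drusen segmentation whose change across longitudinal images can be used to track progression.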

  12. [Systemic cardiovascular risk assessment. Conventional or eye fundus-based?].

    Science.gov (United States)

    Wolf, A; Kernt, M; Kampik, A; Neubauer, A S

    2010-09-01

    Several systemic cardiovascular (CV) risk assessment algorithms exist, of which the ESC HeartScore, Framingham, and PROCAM are the most frequently applied in Germany. The risk estimates they generate differ and take a number of different risk factors into consideration. Owing to the homology between retinal and cerebral vessels, eye fundus examination is a promising approach to improving risk prediction. Large cohort studies investigated retinal vascular changes, including the arteriovenous ratio, as well as signs of retinopathy such as cotton-wool spots, microaneurysms, or retinal hemorrhages, for their ability to predict systemic cardiovascular events. While signs of retinopathy proved to have high predictive power (but are rarely diagnosed), the retinal vascular changes investigated contributed little to enhancing systemic CV risk prediction. A number of new and promising approaches based on static and dynamic retinal analysis exist, but still need to be validated prospectively.

  13. Does Fundus Fluorescein Angiography Procedure Affect Ocular Pulse Amplitude?

    Directory of Open Access Journals (Sweden)

    Gökhan Pekel

    2013-01-01

    Full Text Available Purpose. This study examines the effects of fundus fluorescein angiography (FFA procedure on ocular pulse amplitude (OPA and intraocular pressure (IOP. Materials and Methods. Sixty eyes of 30 nonproliferative diabetic retinopathy patients (15 males, 15 females were included in this cross-sectional case series. IOP and OPA were measured with the Pascal dynamic contour tonometer before and after 5 minutes of intravenous fluorescein dye injection. Results. Pre-FFA mean OPA value was  mmHg and post-FFA mean OPA value was  mmHg (. Pre-FFA mean IOP value was  mmHg and post-FFA mean IOP value was  mmHg (. Conclusion. Although both mean OPA and IOP values were decreased after FFA procedure, the difference was not statistically significant. This clinical trial is registered with Australian New Zealand Clinical Trials Registry number ACTRN12613000433707.

  14. Statistical characterization and segmentation of drusen in fundus images.

    Science.gov (United States)

    Santos-Villalobos, H; Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E

    2011-01-01

    Age related Macular Degeneration (AMD) is a disease of the retina associated with aging. AMD progression in patients is characterized by drusen, pigmentation changes, and geographic atrophy, which can be seen using fundus imagery. The level of AMD is characterized by standard scaling methods, which can be somewhat subjective in practice. In this work we propose a statistical image processing approach to segment drusen with the ultimate goal of characterizing the AMD progression in a data set of longitudinal images. The method characterizes retinal structures with a statistical model of the colors in the retina image. When comparing the segmentation results of the method between longitudinal images with known AMD progression and those without, the method detects progression in our longitudinal data set with an area under the receiver operating characteristics curve of 0.99.

  15. Evaluating intensified camera systems

    Energy Technology Data Exchange (ETDEWEB)

    S. A. Baker

    2000-07-01

    This paper describes image evaluation techniques used to standardize camera system characterizations. Key areas of performance include resolution, noise, and sensitivity. This team has developed a set of analysis tools, in the form of image processing software used to evaluate camera calibration data, to aid an experimenter in measuring a set of camera performance metrics. These performance metrics identify capabilities and limitations of the camera system, while establishing a means for comparing camera systems. Analysis software is used to evaluate digital camera images recorded with charge-coupled device (CCD) cameras. Several types of intensified camera systems are used in the high-speed imaging field. Electro-optical components are used to provide precise shuttering or optical gain for a camera system. These components, including microchannel plate or proximity-focused diode image intensifiers, electrostatic image tubes, and electron-bombarded CCDs, affect system performance. It is important to quantify camera system performance in order to qualify a system as meeting experimental requirements. The camera evaluation tool is designed to provide side-by-side camera comparison and system modeling information.

  16. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  17. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  18. Identification and localization of fovea on colour fundus images using blur scales.

    Science.gov (United States)

    Ganesan, Karthikeyan; Acharya, Rajendra U; Chua, Chua Kuang; Laude, Augustinus

    2014-09-01

    Identification of retinal landmarks is an important step in the extraction of anomalies in retinal fundus images. In the current study, we propose a technique to identify and localize the position of the macula, and hence the foveal avascular zone, in colour fundus images. The proposed method, based on varying blur scales in images, is independent of the location of other anatomical landmarks present in the fundus images. Experimental results are provided for the open MESSIDOR database, validating our segmented regions against ground truth segmentation provided by a human expert using the dice coefficient. Apart from testing on the entire MESSIDOR database, the proposed technique was also validated on 50 normal and 50 diabetic retinopathy digital fundus images chosen from the same database. A maximum overlap accuracy of 89.6%-93.8% and a locational accuracy of 94.7%-98.9% were obtained for identification and localization of the fovea.
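The dice coefficient used for validation is simply twice the overlap between the two masks divided by the sum of their sizes. A minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two binary masks: 2|A n B| / (|A| + |B|),
    equal to 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: a 4x4 ground-truth foveal region vs a detection shifted
# down by one row, so 12 of the 16 pixels in each mask overlap.
truth = np.zeros((20, 20), dtype=bool)
truth[8:12, 8:12] = True
pred = np.zeros_like(truth)
pred[9:13, 8:12] = True
score = dice(truth, pred)  # 2*12 / (16 + 16) = 0.75
```

Dice rewards overlap symmetrically in both masks, which is why it is a common choice for validating segmented regions against an expert's ground truth.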

  19. Simple, inexpensive technique for high-quality smartphone fundus photography in human and animal eyes

    National Research Council Canada - National Science Library

    Haddock, Luis J; Kim, David Y; Mukai, Shizuo

    2013-01-01

    Purpose. We describe in detail a relatively simple technique of fundus photography in human and rabbit eyes using a smartphone, an inexpensive app for the smartphone, and instruments that are readily...

  1. Normal color variations of the canine ocular fundus, a retrospective study in Swedish dogs

    National Research Council Canada - National Science Library

    Granar, Marie I K S; Nilsson, Bo R; Hamberg-Nyström, Helene L

    2011-01-01

    A retrospective study was made to demonstrate normal variations of the color and size of the tapetal area and color of the nontapetal area in the ocular fundus in dogs, correlating them to breed, age and coat color...

  2. Microchannel plate streak camera

    Science.gov (United States)

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 KeV x-rays.

  3. Multimodal retinal vessel segmentation from spectral-domain optical coherence tomography and fundus photography.

    Science.gov (United States)

    Hu, Zhihong; Niemeijer, Meindert; Abràmoff, Michael D; Garvin, Mona K

    2012-10-01

    Segmenting retinal vessels in optic nerve head (ONH) centered spectral-domain optical coherence tomography (SD-OCT) volumes is particularly challenging due to the projected neural canal opening (NCO) and relatively low visibility in the ONH center. Color fundus photographs provide a relatively high vessel contrast in the region inside the NCO, but have not previously been used to aid the SD-OCT vessel segmentation process. Thus, in this paper, we present two approaches for the segmentation of retinal vessels in SD-OCT volumes that each take advantage of complementary information from fundus photographs. In the first approach (the registered-fundus vessel segmentation approach), vessels are first segmented on the fundus photograph directly (using a k-NN pixel classifier) and this vessel segmentation result is mapped to the SD-OCT volume through the registration of the fundus photograph to the SD-OCT volume. In the second approach (the multimodal vessel segmentation approach), after fundus-to-SD-OCT registration, vessels are simultaneously segmented with a k-NN classifier using features from both modalities. Three-dimensional structural information from the intraretinal layers and neural canal opening, obtained through graph-theoretic segmentation of the SD-OCT volume, is used in combination with Gaussian filter banks and Gabor wavelets to generate the features. The approach is trained on 15 and tested on 19 randomly chosen independent image pairs of SD-OCT volumes and fundus images from 34 subjects with glaucoma. Based on a receiver operating characteristic (ROC) curve analysis, the present registered-fundus and multimodal vessel segmentation approaches (area under the curve (AUC) of 0.85 and 0.89, respectively) both perform significantly better than the two previous OCT-based approaches (AUC of 0.78 and 0.83, p < 0.05). The multimodal approach overall performs significantly better than the other three approaches (p < 0.05).

  4. A novel image recuperation approach for diagnosing and ranking retinopathy disease level using diabetic fundus image.

    Science.gov (United States)

    Krishnamoorthy, Somasundaram; Alli, P

    2015-01-01

    Retinal fundus images are widely used in diagnosing and treating several eye diseases. Prior work using retinal fundus images detected the presence of exudation in publicly available datasets through extensive segmentation. Though computationally efficient, that work did not create a diabetic retinopathy feature selection system for transparently diagnosing the disease state, did not employ machine learning methods to classify candidate fundus images by true positive and true negative ratio, and did not include a detailed feature selection technique for diabetic retinopathy. To apply machine learning methods and classify candidate fundus images on the basis of sliding windows, a method called Diabetic Fundus Image Recuperation (DFIR) is designed in this paper. The initial phase of the DFIR method selects optic cup features in digital retinal fundus images using a sliding-window approach, from which the disease state for diabetic retinopathy is assessed. Feature selection in DFIR uses a collection of sliding windows to obtain histogram-based features; combined with a group sparsity non-overlapping function, this yields more detailed feature information. In the second phase, the DFIR method uses a support vector model with a spiral basis function to rank the diabetic retinopathy disease levels. The ranking of disease level for each candidate set provides a promising basis for a practically automated diabetic retinopathy diagnosis system. Experimental work on digital fundus images evaluates the DFIR method in terms of sensitivity, specificity, ranking efficiency, and feature selection time.
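A minimal sketch of the kind of sliding-window histogram features the DFIR method builds on; the window size, bin count, non-overlapping tiling, and normalization here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def window_histogram_features(img, win=16, bins=8):
    """Tile the image with non-overlapping windows and describe each window
    by a normalized intensity histogram, concatenated into one feature
    vector (a simplified stand-in for DFIR's sliding-window features)."""
    h, w = img.shape
    feats = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            hist, _ = np.histogram(img[i:i + win, j:j + win],
                                   bins=bins, range=(0.0, 1.0))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

img = np.random.default_rng(3).random((64, 64))  # toy grayscale fundus image
fv = window_histogram_features(img)              # 16 windows x 8 bins = 128 features
```

In a full pipeline, vectors like `fv` would be the per-image input to a support vector classifier such as the one the paper describes.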

  5. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from the patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious-signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  6. Segmentation of choroidal neovascularization in fundus fluorescein angiograms.

    Science.gov (United States)

    Abdelmoula, Walid M; Shah, Syed M; Fahmy, Ahmed S

    2013-05-01

    Choroidal neovascularization (CNV) is a common manifestation of age-related macular degeneration (AMD). It is characterized by the growth of abnormal blood vessels in the choroidal layer, causing blurring and deterioration of vision. In late stages, these abnormal vessels can rupture the retinal layers, causing complete loss of vision in the affected regions. Determining the CNV size and type in fluorescein angiograms is required for proper treatment and prognosis of the disease. Computer-aided methods for CNV segmentation are needed not only to reduce the burden of manual segmentation but also to reduce inter- and intraobserver variability. In this paper, we present a framework for segmenting CNV lesions based on parametric modeling of the intensity variation in fundus fluorescein angiograms. First, a novel model is proposed to describe the temporal intensity variation at each pixel in image sequences acquired by fluorescein angiography. The set of model parameters at each pixel is then used to segment the image into regions of homogeneous parameters. Preliminary results on datasets from 21 patients with wet AMD show the potential of the method to segment CNV lesions in close agreement with manual segmentation.
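The idea of fitting a per-pixel temporal model to an angiogram sequence and segmenting on the fitted parameters can be illustrated with a deliberately simple stand-in model. The abstract does not give the paper's actual parametric model of fluorescein dynamics, so a linear intensity ramp is assumed here purely for illustration.

```python
import numpy as np

def fit_temporal_model(frames, times):
    """Fit a per-pixel linear model I(t) = a + b*t by least squares.
    frames: (T, H, W) angiogram sequence; times: (T,) acquisition times.
    (Stand-in model only; the paper proposes its own parametric model
    of the temporal intensity variation.)"""
    t, h, w = frames.shape
    X = np.vstack([np.ones_like(times), times]).T   # T x 2 design matrix
    Y = frames.reshape(t, -1)                       # pixels as columns
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)    # coef is 2 x (H*W)
    a, b = coef.reshape(2, h, w)
    return a, b

def segment_by_uptake(frames, times, slope_thresh):
    """Label pixels whose fitted slope exceeds a threshold, i.e. regions
    that brighten over the sequence, as candidate leakage."""
    _, b = fit_temporal_model(frames, times)
    return b > slope_thresh
```

Grouping pixels by parameter similarity, rather than simple thresholding, would be closer to the region-homogeneity segmentation the abstract describes.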

  7. Fundus Autofluorescence in Age-related Macular Degeneration

    Science.gov (United States)

    Ly, Angelica; Nivison-Smith, Lisa; Assaad, Nagi; Kalloniatis, Michael

    2017-01-01

    Fundus autofluorescence (FAF) provides detailed insight into the health of the retinal pigment epithelium (RPE). This is highly valuable in age-related macular degeneration (AMD) as RPE damage is a hallmark of the disease. The purpose of this paper is to critically appraise current clinical descriptions regarding the appearance of AMD using FAF and to integrate these findings into a chair-side reference. A wide variety of FAF patterns have been described in AMD, which is consistent with the clinical heterogeneity of the disease. In particular, FAF imaging in early to intermediate AMD has the capacity to reveal RPE alterations in areas that appear normal on funduscopy, which aids in the stratification of cases and may have visually significant prognostic implications. It can assist in differential diagnoses and also represents a reliable, sensitive method for distinguishing reticular pseudodrusen. FAF is especially valuable in the detection, evaluation, and monitoring of geographic atrophy and has been used as an endpoint in clinical trials. In neovascular AMD, FAF reveals distinct patterns of classic choroidal neovascularization noninvasively and may be especially useful for determining which eyes are likely to benefit from therapeutic intervention. FAF represents a rapid, effective, noninvasive imaging method that has been underutilized, and incorporation into the routine assessment of AMD cases should be considered. However, the practicing clinician should also be aware of the limitations of the modality, such as in the detection of foveal involvement and in the distinction of phenotypes (e.g., distinguishing hypo-autofluorescent drusen from small areas of geographic atrophy). PMID:27668639

  8. Techniques of Glaucoma Detection From Color Fundus Images: A Review

    Directory of Open Access Journals (Sweden)

    Malaya Kumar Nath

    2012-09-01

    Full Text Available Glaucoma is a generic name for a group of diseases which cause progressive optic neuropathy and vision loss due to degeneration of the optic nerves. Optic nerve cells act as transducers, converting light entering the eye into electrical signals for visual processing in the brain. The main risk factors for glaucoma are elevated intraocular pressure exerted by the aqueous humour, family history of glaucoma (heredity), and diabetes. Glaucoma damages the eye whether intraocular pressure is high, normal, or below normal, and it causes peripheral vision loss. There are different types of glaucoma, and some occur suddenly, so detection of glaucoma is essential for minimizing vision loss. An increased cup-to-disc area ratio is the significant change during glaucoma. Diagnosis of glaucoma is based on measurement of intraocular pressure by tonometry, visual field examination by perimetry, and measurement of the cup-to-disc area ratio from color fundus images. In this paper, different signal processing techniques for the detection and classification of glaucoma are discussed.

  9. Interactive segmentation for geographic atrophy in retinal fundus images.

    Science.gov (United States)

    Lee, Noah; Smith, R Theodore; Laine, Andrew F

    2008-10-01

    Fundus autofluorescence (FAF) imaging is a non-invasive technique for in vivo ophthalmoscopic inspection of age-related macular degeneration (AMD), the most common cause of blindness in developed countries. Geographic atrophy (GA) is an advanced form of AMD and accounts for 12-21% of severe visual loss in this disorder [3]. Automatic quantification of GA is important for determining disease progression and facilitating clinical diagnosis of AMD, but automatic segmentation of pathological images remains an unsolved problem. In this paper we leverage the watershed transform and generalized non-linear gradient operators for interactive segmentation and present an intuitive and simple approach to geographic atrophy segmentation. We compare our approach with the state-of-the-art random walker [5] algorithm for interactive segmentation using ROC statistics. Quantitative evaluation on 100 FAF images shows a mean sensitivity/specificity of 98.3/97.7% for our approach and a mean sensitivity/specificity of 88.2/96.6% for the random walker algorithm.
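A minimal marker-based watershed, the core of such an interactive scheme, can be sketched as a priority flood outward from user-placed seeds over a gradient image. This is an assumption-laden simplification: the paper's generalized non-linear gradient operators are not reproduced, and real FAF pipelines need preprocessing.

```python
import heapq
import numpy as np

def marker_watershed(gradient, markers):
    """Minimal priority-flood watershed on a gradient image.
    markers: int array, 0 = unlabeled, >0 = user-placed seed labels.
    Pixels are flooded in order of increasing gradient, so region
    boundaries settle on gradient ridges."""
    labels = markers.copy()
    h, w = gradient.shape
    heap = [(gradient[y, x], y, x) for y in range(h) for x in range(w)
            if markers[y, x] > 0]
    heapq.heapify(heap)
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]  # inherit the flooding label
                heapq.heappush(heap, (gradient[ny, nx], ny, nx))
    return labels
```

In an interactive tool, the user's foreground/background strokes supply the marker labels and the flood assigns every remaining pixel to one of them.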

  10. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting, and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  11. Ringfield lithographic camera

    Science.gov (United States)

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  12. CCD Luminescence Camera

    Science.gov (United States)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. A microscope is integrated with a low-noise charge-coupled-device (CCD) camera to produce a new instrument for analyzing performance and failures of microelectronic devices that emit infrared light during operation. The CCD camera is also used to identify very clearly parts that have failed, where luminescence is typically found.

  13. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between...... to establish analysis as a continued, iterative movement of transcultural dialogue and critique....

  14. Camera Operator and Videographer

    Science.gov (United States)

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  15. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...... a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection......, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras....

  16. Dry imaging cameras

    Directory of Open Access Journals (Sweden)

    I K Indrajit

    2011-01-01

    Full Text Available Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences such as computing, mechanics, thermal science, optics, electricity, and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technology. Compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  17. Laparoscopic resection of submucosal tumor on posterior wall of gastric fundus

    Institute of Scientific and Technical Information of China (English)

    Zhong-Wei Ke; Cheng-Zhu Zheng; Ming-Gen Hu; Dan-Lei Chen

    2004-01-01

    AIM: Laparoscopic resection of tumors on the posterior wall of the gastric fundus, especially when they are next to the esophagocardiac junction (ECJ), is both difficult and time-consuming, and it can lead to inadvertent esophageal stenosis and injury to the spleen. To overcome these difficulties, laparoscopically extraluminal resection of the gastric fundus was designed to manage submucosal tumors located on the posterior wall of the gastric fundus next to the ECJ. METHODS: From January 2001 to September 2003, laparoscopically extraluminal resection of the gastric fundus was successfully carried out on 15 patients: 11 males and 4 females with an average age of 58 years (range, 38 to 78 years). The mean diameter of the tumors was 4.8 cm, and the distance of the tumor border from the ECJ was about 1.5-2.5 cm. The four-portal operation proceeded as follows: localization of the tumor, dissection of the omentum, mobilization of the gastric fundus and the upper pole of the spleen, exposure of the ECJ, and resection of the gastric fundus with an Endo GIA stapler. RESULTS: The laparoscopic operation time averaged (66.2±10.4) min, the average blood loss was (89.4±21.7) mL, and the mean post-operative hospital stay was (5.3±1.1) d. Within 36 h post-operation, 73.3% of the patients recovered gastrointestinal function and began to eat and walk. In all operations, no apparent tumor focus was left, and no complication or conversion to open surgery occurred. CONCLUSION: Our newly designed procedure, laparoscopically extraluminal resection of the gastric fundus, can avoid contamination of the abdominal cavity, injury to the spleen, and esophageal stenosis. The procedure appears to be both safe and effective.

  18. Variance Owing to Observer, Repeat Imaging, and Fundus Camera Type on Cup-to-disc Ratio Estimates by Stereo Planimetry

    NARCIS (Netherlands)

    Kwon, Young H.; Adix, Michael; Zimmerman, M. Bridget; Piette, Scott; Greenlee, Emily C.; Alward, Wallace L. M.; Abramoff, M.D.

    2009-01-01

    Objective: To determine and compare variance components in linear cup-to-disc ratio (LCDR) estimates by computer-assisted planimetry by human experts, and automated machine algorithm (digital automated planimetry). Design: Prospective case series for evaluation of planimetry.

  20. Unique identification code for medical fundus images using blood vessel pattern for tele-ophthalmology applications.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar

    2016-10-01

    Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for generating a unique identification code to identify fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded and no medical information is lost. A strategic combination of the unique blood vessel pattern and the patient ID is used to generate the identification code for the digital fundus image: the segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to produce a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR fundus image databases, and the results are encouraging. Experimental results indicate the uniqueness of the identification code and lossless recovery of the patient identity from it for integrity verification of fundus images.
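One simple way to bind a segmented vessel pattern to a patient ID is sketched below. Note this hash-based sketch is only illustrative and deliberately simpler than the paper's scheme: the abstract states that the patient identity can be recovered losslessly from the code, which a one-way hash does not allow, and the exact encoding is not described there.

```python
import hashlib
import numpy as np

def identification_code(vessel_mask, patient_id):
    """Derive a stable code from a binary vessel mask plus a patient ID
    (schematic only; the paper's encoding, which permits lossless
    recovery of the patient ID, is different and not reproduced here)."""
    # Pack the binary vessel pattern into bytes so identical patterns
    # always serialize identically.
    pattern_bytes = np.packbits(vessel_mask.astype(np.uint8)).tobytes()
    digest = hashlib.sha256(pattern_bytes + patient_id.encode()).hexdigest()
    return digest[:32]  # 128-bit hex code
```

The same mask and ID always yield the same code, while any change to either produces a different one, which is the integrity-verification property the abstract emphasizes.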

  1. Automated Brightness and Contrast Adjustment of Color Fundus Photographs for the Grading of Age-Related Macular Degeneration

    Science.gov (United States)

    Tsikata, Edem; Laíns, Inês; Gil, João; Marques, Marco; Brown, Kelsey; Mesquita, Tânia; Melo, Pedro; da Luz Cachulo, Maria; Kim, Ivana K.; Vavvas, Demetrios; Murta, Joaquim N.; Miller, John B.; Silva, Rufino; Miller, Joan W.; Chen, Teresa C.; Husain, Deeba

    2017-01-01

    Purpose The purpose of this study was to develop an algorithm to automatically standardize the brightness, contrast, and color balance of digital color fundus photographs used to grade AMD and to validate this algorithm by determining the effects of the standardization on image quality and disease grading. Methods Seven-field color photographs of patients (>50 years) with any stage of AMD and a control group were acquired at two study sites, with either the Topcon TRC-50DX or Zeiss FF-450 Plus cameras. Field 2 photographs were analyzed. Pixel brightness values in the red, green, and blue (RGB) color channels were adjusted in custom-built software to make the mean brightness and contrast of the images equal to optimal values determined by the Age-Related Eye Disease Study (AREDS) 2 group. Results Color photographs of 370 eyes were analyzed. We found a wide range of brightness and contrast values in the images at baseline, even for those taken with the same camera. After processing, image brightness variability (brightest image–dimmest image in a color channel) was reduced 69-fold, 62-fold, and 96-fold for the RGB channels. Contrast variability was reduced 6-fold, 8-fold, and 13-fold, respectively, after adjustment. Of the 23% of images considered nongradable before adjustment, only 5.7% remained nongradable after processing. Conclusions This automated software enables rapid and accurate standardization of color photographs for AMD grading. Translational Relevance This work offers the potential to be the future of assessing and grading AMD from photographs for clinical research and teleimaging.
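The per-channel adjustment described, rescaling each RGB channel so its mean brightness and contrast match target values, can be sketched as below. The AREDS2 target values themselves are not reproduced here; target_mean and target_std are placeholders, and contrast is interpreted as the channel's standard deviation for this sketch.

```python
import numpy as np

def standardize_channels(img, target_mean, target_std):
    """Linearly rescale each RGB channel of an (H, W, 3) uint8 image to a
    target mean and contrast (std), clipping to the displayable range.
    A sketch of per-channel brightness/contrast standardization; the
    study's actual target values are not reproduced."""
    out = np.empty(img.shape, dtype=np.float64)
    for c in range(3):
        ch = img[..., c].astype(np.float64)
        std = ch.std()
        scale = target_std / std if std > 0 else 1.0
        out[..., c] = (ch - ch.mean()) * scale + target_mean
    return np.clip(out, 0, 255).round().astype(np.uint8)
```

Because the transform is linear per channel, it equalizes brightness and contrast across cameras without altering the relative structure within a channel (apart from clipping at the range limits).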

  2. Do Speed Cameras Reduce Collisions?

    OpenAIRE

    Skubic, Jeffrey; Johnson, Steven B.; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods – before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions.

  3. Do speed cameras reduce collisions?

    Science.gov (United States)

    Skubic, Jeffrey; Johnson, Steven B; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods - before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions.

  4. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  5. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position, and some additional parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations, and comparisons with real data and different case studies are illustrated to show the potential of the proposed methodology.

  6. TOUCHSCREEN USING WEB CAMERA

    Directory of Open Access Journals (Sweden)

    Kuntal B. Adak

    2015-10-01

    Full Text Available In this paper we present a web-camera-based touchscreen system which uses a simple technique to detect and locate a finger. We have used a camera and a regular screen to achieve our goal. By capturing video and calculating the position of the finger on the screen, we can determine the touch position and perform some function at that location. Our method is very easy and simple to implement, and our system requirements are less expensive compared with other techniques.
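A toy version of the detection step locates the finger as the centroid of pixels that differ from a reference background frame. This is an assumed stand-in: the paper's actual detection technique and its calibration from camera pixels to screen coordinates are not described in the abstract.

```python
import numpy as np

def locate_finger(background, frame, thresh=40):
    """Return the (x, y) centroid of pixels differing from a reference
    background frame, or None if nothing moved (toy stand-in for a
    webcam finger detector; threshold value is an assumption)."""
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    ys, xs = np.nonzero(diff)
    if len(xs) == 0:
        return None
    return int(xs.mean()), int(ys.mean())
```

A real system would additionally map this camera-space position onto screen coordinates via a calibration homography before dispatching the touch event.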

  7. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used...... such as the circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  8. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; et al.

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located easily and more accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.

  9. Wnt/β-catenin promotes gastric fundus specification in mice and humans.

    Science.gov (United States)

    McCracken, Kyle W; Aihara, Eitaro; Martin, Baptiste; Crawford, Calyn M; Broda, Taylor; Treguier, Julie; Zhang, Xinghao; Shannon, John M; Montrose, Marshall H; Wells, James M

    2017-01-12

    Despite the global prevalence of gastric disease, there are few adequate models in which to study the fundus epithelium of the human stomach. We differentiated human pluripotent stem cells (hPSCs) into gastric organoids containing fundic epithelium by first identifying and then recapitulating key events in embryonic fundus development. We found that disruption of Wnt/β-catenin signalling in mouse embryos led to conversion of fundic to antral epithelium, and that β-catenin activation in hPSC-derived foregut progenitors promoted the development of human fundic-type gastric organoids (hFGOs). We then used hFGOs to identify temporally distinct roles for multiple signalling pathways in epithelial morphogenesis and differentiation of fundic cell types, including chief cells and functional parietal cells. hFGOs are a powerful model for studying the development of the human fundus and the molecular bases of human gastric physiology and pathophysiology, and also represent a new platform for drug discovery.

  10. [New Approach of Fundus Image Segmentation Evaluation Based on Topology Structure].

    Science.gov (United States)

    Sheng, Hanwei; Dai, Peishan; Liu, Zhihang; Zhang-Wen, Miaoyun; Zhao, Yali; Fan, Min

    2015-10-01

    For the evaluation of fundus image segmentation, a new evaluation method is proposed to compensate for the insufficiency of the traditional method, which considers only the overlap of pixels and neglects the topology of the retinal vessels. Mathematical morphology and a thinning algorithm were used to obtain the retinal vascular topology. Three features of the retinal vessels, mutual information, correlation coefficient, and ratio of nodes, were then calculated. These features of the thinned images, taken as the topology of the blood vessels, were used to evaluate retinal image segmentation. Manually labeled images from the STARE database and their eroded versions were used in the experiment. The results showed that these features can be used to evaluate the segmentation quality of retinal vessels in fundus images through their topology, and the algorithm is simple. The method is a significant supplement to traditional segmentation evaluation of retinal vessels in fundus images.
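Two of the three topology features, mutual information between thinned images and the ratio of nodes, might be computed along these lines. This is a sketch under stated assumptions: the paper's exact node definition and its correlation-coefficient feature are not reproduced, and a branch point is taken here to be any skeleton pixel with at least three 8-connected skeleton neighbours.

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (bits) between two binary (thinned) vessel
    images, estimated from their joint 2x2 histogram."""
    joint = np.histogram2d(a.ravel(), b.ravel(), bins=2)[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def node_ratio(skel_a, skel_b):
    """Ratio of branch-point counts between two skeletons; a pixel with
    >= 3 skeleton neighbours is treated as a node (simplified definition)."""
    def count_nodes(s):
        padded = np.pad(s, 1)
        nbrs = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - padded
        return int(((padded == 1) & (nbrs >= 3)).sum())
    na, nb = count_nodes(skel_a), count_nodes(skel_b)
    return na / nb if nb else 0.0
```

A segmentation whose skeleton preserves the reference's branch points (ratio near 1) and intensity co-occurrence (high mutual information) scores well even if individual vessel pixels shift slightly, which is exactly what pixel-overlap measures miss.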

  11. Fundus autofluorescence and optical coherence tomography findings in thiamine responsive megaloblastic anemia.

    Science.gov (United States)

    Ach, Thomas; Kardorff, Rüdiger; Rohrschneider, Klaus

    2015-01-01

    To report fundus autofluorescence and spectral domain optical coherence tomography findings in a patient with thiamine responsive megaloblastic anemia (TRMA). A 13-year-old girl with genetically proven TRMA was followed up ophthalmologically (visual acuity, funduscopy, perimetry, electroretinogram) for more than 5 years. Fundus imaging also included autofluorescence and spectral domain optical coherence tomography. During the 5-year follow-up, visual acuity and the visual field decreased despite a special TRMA diet. Funduscopy revealed a bull's eye appearance, whereas fundus autofluorescence showed central and peripheral hyperfluorescence and perifoveal hypofluorescence. Spectral domain optical coherence tomography revealed an affected inner segment ellipsoid band and irregularities in the retinal pigment epithelium and choroid. Autofluorescence and spectral domain optical coherence tomography findings in a patient with TRMA show retinitis pigmentosa-like alterations of the retina, retinal pigment epithelium, and choroid. These findings may progress even under the special TRMA diet, which is indispensable to life. Ophthalmologists should consider TRMA in patients with deafness and ophthalmologic disorders.

  12. Ocular fundus pathology and chronic kidney disease in a Chinese population

    Directory of Open Access Journals (Sweden)

    Gao Bixia

    2011-11-01

    Full Text Available Abstract Background A previous study indicated a high prevalence of ocular fundus pathology among patients with chronic kidney disease (CKD), but the relationship between them has never been explored in a Chinese population. Methods This cross-sectional study included 9,670 participants enrolled in a medical screening program. Ocular fundus examination was performed by ophthalmologists using ophthalmoscopes. The presence of eGFR less than 60 mL/min/1.73 m2 and/or proteinuria was defined as CKD. Results Compared to participants without CKD, participants with CKD had a higher prevalence of retinopathy (28.5% vs. 16.3%, P Conclusions Ocular fundus pathology is common among Chinese patients with CKD. Regular eye exams among persons with proteinuria are warranted.

  13. Classification of diabetic retinopathy using fractal dimension analysis of eye fundus image

    Science.gov (United States)

    Safitri, Diah Wahyu; Juniati, Dwi

    2017-08-01

    Diabetes Mellitus (DM) is a metabolic disorder in which the pancreas produces inadequate insulin, or the body resists insulin action, so that the blood glucose level is high. One of the most common complications of diabetes mellitus is diabetic retinopathy, which can lead to vision problems. Diabetic retinopathy can be recognized by abnormalities in the eye fundus, characterized by microaneurysms, hemorrhages, hard exudates, cotton wool spots, and venous changes. Diabetic retinopathy is graded according to the abnormalities present in the eye fundus: grade 1 if there are only microaneurysms; grade 2 if there are microaneurysms and hemorrhages; and grade 3 if there are microaneurysms, hemorrhages, and neovascularization. This study proposes a method for processing eye fundus images to classify diabetic retinopathy using fractal analysis and K-Nearest Neighbor (KNN). The first phase is an image segmentation process using the green channel, CLAHE, morphological opening, a matched filter, masking, and morphological opening of the binary image. After segmentation, the fractal dimension is calculated using the box-counting method, and the fractal dimension values are analyzed to classify the diabetic retinopathy. Tests were carried out using k-fold cross validation with k=5, and in each test 10 different values of K were used for KNN. The best accuracy of the method is 89.17%, obtained with K=3 or K=4. Based on these results, it can be concluded that the classification of diabetic retinopathy using fractal analysis and KNN performs well.
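The box-counting estimate of fractal dimension used in this classification pipeline can be sketched as follows, assuming a square binary segmentation mask; the box sizes used here are illustrative, not taken from the paper.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting:
    count boxes containing any foreground pixel at several scales, then
    fit log(count) against log(1/size). Assumes a square mask."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        trimmed = mask[:n - n % s, :n - n % s]
        grid = trimmed.reshape(n // s, s, -1, s)
        occupied = grid.any(axis=(1, 3)).sum()  # boxes with foreground
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled region yields a dimension near 2 and a simple curve near 1, so branching vessel patterns fall in between, which is what makes the value a useful KNN feature.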

  14. EFFECT OF FUNDUS PIGMENT ON RESPONSE OF RABBIT RETINA TO TRANSPUPILLARY THERMOTHERAPY

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Objective To study the effect of fundus pigment on the response of the retina to transpupillary thermotherapy (TTT). Methods The retinas of 16 eyes of 8 pigmented rabbits and 12 eyes of 6 albino rabbits were irradiated with an 810 nm diode laser. The spot size was 1.2 mm, the duration was 60 s, and the powers were 50, 80, 150, and 300 mW for pigmented rabbits and 800, 1200, and 1500 mW for albino rabbits. All eyes were followed up by ophthalmoscopy. The fundus was photographed and examined histologically by light microscopy immediately and 1 month after TTT. Results Changes of the fundus and the histological findings were not significant immediately or 1 month after TTT in the 50 mW group of pigmented rabbits and the 800 mW group of albino rabbits. A grey spot on the retina was observed on the fundus in the 80 mW group of pigmented rabbits and the 1200 mW group of albino rabbits immediately after TTT; the structure of the retina remained intact, and subretinal fluid was observed histologically. The grey spot was still visible on the fundus after 1 month, though the fluid had been absorbed. As the laser power was increased to 150 mW for pigmented rabbits and 1500 mW for albino rabbits, white spots were observed on the fundus, and the outer retina was destroyed while photoreceptors persisted, immediately after TTT. Pigmentation was found in the white lesions, and fibrous proliferation was found in the choroid, 1 month after TTT. A prominent white spot was seen on the fundus immediately after laser irradiation at 300 mW in pigmented rabbits, and the structure of the retina was obscured. One month after TTT, dense pigmentation appeared at the laser lesions, the retina was thinner, and there was prominent fibrous proliferation in the choroid. Conclusion Fundus pigment appears to play an important role in the response of the retina to TTT; the reaction of the retina is proportional to the laser intensity.

  15. Uncooled radiometric camera performance

    Science.gov (United States)

    Meyer, Bill; Hoelter, T.

    1998-07-01

Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions for applications currently using traditional photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended-range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high-resolution video imaging creates some unique challenges when using uncooled detectors. A temperature-controlled, field-of-view-limiting aperture (cold shield) is not typically included in the small-volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.
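The quoted accuracy specification (the greater of 4 degrees Celsius or 4% of the measured value) maps directly onto a worst-case uncertainty helper. The sketch below is illustrative only; the function name is ours, not Raytheon's.

```python
def explorir_accuracy_c(scene_temp_c: float) -> float:
    """Worst-case measurement uncertainty per the quoted ExplorIR spec:
    the greater of 4 degrees C or 4% of the measured value."""
    return max(4.0, 0.04 * abs(scene_temp_c))
```

For scenes below 100 degrees Celsius the 4-degree floor dominates; at the 300-degree top of the standard range the percentage term gives plus or minus 12 degrees.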

  16. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

Despite the current availability in resource-rich regions of advanced scanning and 3-D imaging technologies in ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
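Once the cup and disc boundaries are segmented, the two screening parameters named above are simple ratios. A minimal sketch, assuming circular boundaries for illustration (a real pipeline would measure the segmented contours; the function name is ours):

```python
def cdr_car(cup_diameter_mm: float, disc_diameter_mm: float):
    """Cup-to-disc diameter ratio (CDR) and cup-to-disc area ratio (CAR),
    treating both boundaries as circles."""
    cdr = cup_diameter_mm / disc_diameter_mm
    car = cdr ** 2  # for circular boundaries the pi/4 area factors cancel
    return cdr, car
```

The circular-boundary assumption makes CAR exactly the square of CDR; real cups are eccentric, which is why the paper computes both independently from the segmentation.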

  17. Neutron counting with cameras

    Energy Technology Data Exchange (ETDEWEB)

Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo [Institut Laue Langevin, Grenoble (France)]

    2015-07-01

A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, transitioning smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras, as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology, which have resulted in increased quantum efficiency, lower noise, and frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)
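The counting idea described above, thresholding each frame well above the noise floor and grouping connected bright pixels so that one conversion flash is one count, can be sketched as follows. This is a toy illustration of the principle, not the ILL pipeline; the 5-sigma threshold is an assumed choice.

```python
import numpy as np

def count_events(frame: np.ndarray, noise_sigma: float, k: float = 5.0) -> int:
    """Count isolated bright spots by thresholding at k standard deviations
    above background, then flood-filling 4-connected pixels so each neutron
    conversion flash is counted exactly once."""
    mask = frame > k * noise_sigma
    visited = np.zeros_like(mask, dtype=bool)
    events = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not visited[i, j]:
                events += 1                      # new connected component
                stack = [(i, j)]
                while stack:                     # iterative flood fill
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and not visited[y, x]):
                        visited[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return events
```

At high rates the flashes overlap, the labelling breaks down, and the same frames are instead summed as a conventional integrating image, which is the smooth transition the abstract describes.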

  18. The Dark Energy Camera

    CERN Document Server

    Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

    2015-01-01

The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2kx4k CCDs for imaging and 12 2kx2k CCDs for guiding and focus. The CCDs have 15 micron x 15 micron pixels with a plate scale of 0.263 arc sec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  19. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
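The dynamic-range figures quoted above are standard optical decibel values, DR = 20·log10(Imax/Imin). A small helper (the function name is ours) makes the numbers concrete:

```python
import math

def dynamic_range_db(i_max: float, i_min: float) -> float:
    """Optical dynamic range in decibels: 20 * log10(Imax / Imin)."""
    return 20.0 * math.log10(i_max / i_min)
```

On this scale the demonstrated 82.06 dB corresponds to a brightest-to-dimmest irradiance ratio of roughly 12,700, versus roughly 370 for the CMOS sensor's rated 51.3 dB, which is why the three-brightness target scene exceeds the CMOS sensor alone.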

  20. The Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

Flaugher, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; et al.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
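The pixel pitch and plate scale quoted above imply an effective focal length via f = pixel size / (plate scale in radians), a quick consistency check against the 4-meter primary (the helper name is ours):

```python
import math

ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi  # ~206265 arcsec per radian

def focal_length_m(pixel_m: float, plate_scale_arcsec: float) -> float:
    """Effective focal length implied by a detector's pixel pitch and
    plate scale: f = pixel / (plate scale converted to radians)."""
    return pixel_m * ARCSEC_PER_RAD / plate_scale_arcsec
```

DECam's 15 μm pixels at 0.263"/pixel give f of about 11.8 m, i.e. roughly f/2.9 on the 4 m Blanco primary, consistent with a prime-focus corrector.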

  1. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirror and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is possible.

  2. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual...... camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  3. Ocular Fundus Photography as a Tool to Study Stroke and Dementia.

    Science.gov (United States)

    Cheung, Carol Y; Chen, Christopher; Wong, Tien Y

    2015-10-01

Although cerebral small vessel disease has been linked to stroke and dementia, due to limitations of current neuroimaging technology, direct in vivo visualization of changes in the cerebral small vessels (e.g., cerebral arteriolar narrowing, tortuous microvessels, blood-brain barrier damage, capillary microaneurysms) is difficult to achieve. As the retina and the brain share similar embryological origin, anatomical features, and physiologic properties with the cerebral small vessels, the retinal vessels offer a unique and easily accessible "window" to study the correlates and consequences of cerebral small vessel diseases in vivo. The retinal microvasculature can be visualized, quantified, and monitored noninvasively using ocular fundus photography. Recent clinic- and population-based studies have demonstrated a close link between retinal vascular changes seen on fundus photography and stroke and dementia, suggesting that ocular fundus photography may provide insights into the contribution of microvascular disease to stroke and dementia. In this review, we summarize current knowledge on retinal vascular changes, such as retinopathy and changes in retinal vascular measures, with stroke and dementia as well as subclinical markers of cerebral small vessel disease, and discuss the possible clinical implications of these findings in neurology. Studying pathologic changes of retinal blood vessels may be useful for understanding the etiology of various cerebrovascular conditions; hence, ocular fundus photography can potentially be translated into clinical practice.

  4. An indirect action of dopamine on the rat fundus strip mediated by 5-hydroxytryptamine

    NARCIS (Netherlands)

    Sonneville, P.F.

    Dopamine in a concentration of 10−7 molar produces a contraction of the rat stomach fundus preparation. This effect is blocked by the 5-HT antagonist methysergide. Repeated exposure to dopamine results in tachyphylaxis, but the sensitivity to dopamine can be restored by incubating the tissue with

  5. Visual stimulus-induced changes in human near-infrared fundus reflectance

    NARCIS (Netherlands)

    Abramoff, M.D.; Kwon, Y.H.; Ts’o, D.; Soliz, P.; Zimmerman, B.; Pokorny, J.; Kardon, R.

    2006-01-01

    PURPOSE. Imaging studies from anesthetized feline, primate, and human retinas have revealed near-infrared fundus reflectance changes induced by visible light stimulation. In the present study, the spatial and temporal properties of similar changes were characterized in normal, awake humans. METHODS.

  6. Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs.

    NARCIS (Netherlands)

    Niemeijer, M.; Ginneken, B. van; Cree, M.J.; Mizutani, A.; Quellec, G.; Sanchez, C.I.; Zhang, B.; Hornero, R.; Lamard, M.; Muramatsu, C.; Wu, X.; Cazuguel, G.; You, J.; Mayo, A.; Li, Q.; Hatanaka, Y.; Cochener, B.; Roux, C.; Karray, F.; Garcia, M.; Fujita, H.; Abramoff, M.D.

    2010-01-01

    The detection of microaneurysms in digital color fundus photographs is a critical first step in automated screening for diabetic retinopathy (DR), a common complication of diabetes. To accomplish this detection numerous methods have been published in the past but none of these was compared with each

  7. An indirect action of dopamine on the rat fundus strip mediated by 5-hydroxytryptamine

    NARCIS (Netherlands)

    Sonneville, P.F.

    1968-01-01

    Dopamine in a concentration of 10−7 molar produces a contraction of the rat stomach fundus preparation. This effect is blocked by the 5-HT antagonist methysergide. Repeated exposure to dopamine results in tachyphylaxis, but the sensitivity to dopamine can be restored by incubating the tissue with 5-

  8. High K+-Induced Relaxation by Nitric Oxide in Human Gastric Fundus

    Science.gov (United States)

    Kim, Dae Hoon; Choi, Woong; Sung, Rohyun; Kim, Hun Sik; Kim, Heon; Yoo, Ra Young; Park, Seon-Mee; Yun, Sei Jin; Song, Young-Jin; Xu, Wen-Xie; Lee, Sang Jin

    2012-01-01

This study was designed to elucidate high K+-induced relaxation in the human gastric fundus. Circular smooth muscle from the greater curvature of the human gastric fundus showed stretch-dependent high K+ (50 mM)-induced contractions. However, longitudinal smooth muscle produced stretch-dependent high K+-induced relaxation. We investigated several relaxation mechanisms to understand the reason for the discrepancy. Protein kinase inhibitors such as KT 5823 (1 µM) and KT 5720 (1 µM), which block the protein kinases PKG and PKA, had no effect on high K+-induced relaxation. K+ channel blockers, except 4-aminopyridine (4-AP), a voltage-dependent K+ channel (KV) blocker, did not affect high K+-induced relaxation. However, N(G)-nitro-L-arginine and 1H-(1,2,4)oxadiazolo(4,3-a)quinoxalin-1-one, inhibitors of nitric oxide synthase and soluble guanylate cyclase (sGC), respectively, as well as 4-AP, inhibited the relaxation and reversed it to contraction. High K+-induced relaxation of the human gastric fundus was observed only in the longitudinal muscles from the greater curvature. These data suggest that the longitudinal muscle of the human gastric fundus greater curvature produces high K+-induced relaxation that is activated by the nitric oxide/sGC pathway through a KV channel-dependent mechanism. PMID:23118553

  9. Automatic Drusen Quantification and Risk Assessment of Age-related Macular Degeneration on Color Fundus Images

    NARCIS (Netherlands)

    Grinsven, M.J.J.P. van; Lechanteur, Y.T.E.; Ven, J.P.H. van de; Ginneken, B. van; Hoyng, C.B.; Theelen, T.; Sanchez, C.I.

    2013-01-01

    PURPOSE: To evaluate a machine learning algorithm that allows for computer aided diagnosis (CAD) of non-advanced age-related macular degeneration (AMD) by providing an accurate detection and quantification of drusen location, area and size. METHODS: Color fundus photographs of 407 eyes without AMD

  10. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  11. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  12. Underwater camera with depth measurement

    Science.gov (United States)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined from variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete redesign of the light source component. The ToF camera system, instead, allows arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
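Continuous-wave ToF cameras of the kind described recover depth from the phase shift of the modulated illumination, d = c·φ/(4π·f_mod), where the factor of two from the round trip is folded into the 4π. The sketch below assumes this standard CW-ToF relation; using the approximate speed of light in water (n ≈ 1.33) is our assumption for the underwater setting, and the names are ours.

```python
import math

C_WATER = 2.25e8  # approximate speed of light in water (m/s), n ~ 1.33

def tof_depth_m(phase_rad: float, f_mod_hz: float, c: float = C_WATER) -> float:
    """Depth from a continuous-wave ToF camera's measured phase shift:
    d = c * phi / (4 * pi * f_mod)."""
    return c * phase_rad / (4.0 * math.pi * f_mod_hz)
```

A full 2π of phase at a 20 MHz modulation frequency wraps at c/(2·f_mod), about 5.6 m in water, which bounds the unambiguous working range.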

  13. Role of M1 receptor in regulation of gastric fundus smooth muscle contraction

    Directory of Open Access Journals (Sweden)

    Marta Gajdus

    2011-09-01

Background: The subject of this study is the determination of the influence of drugs on gastric fundus smooth muscle contraction induced by activation of muscarinic M1 receptors. Experiments tested interactions between a receptor agonist, carbachol, and the muscarinic receptor antagonists atropine and pirenzepine. Material/Methods: Testing was conducted on tissues isolated from the rat stomach. Male Wistar rats weighing between 220 g and 360 g were anesthetized by intraperitoneal injection of urethane (120 mg/kg). The stomach was dissected, and the gastric fundus was then isolated. Tissue was placed in a 20 ml isolated-organ bath filled with Krebs fluid. Results contained in the study are mean values ± SE. To determine statistical significance, the principles of receptor theory were used (Kenakin modification). Results: According to the tests, carbachol, in concentrations ranging from 10–8 M to 10–4 M, induces gastric fundus smooth muscle contraction in a dose-dependent way. The presented results indicate that carbachol meets the conditions posed to full agonists. On the other hand, atropine, a non-selective muscarinic receptor antagonist, causes a concentration-dependent shift of the concentration-effect curve (for carbachol) to the right, with the maximum response maintained. From analysis of the curve, we can deduce that atropine meets the conditions posed to competitive antagonists. The use of pirenzepine, a competitive M1 receptor antagonist, likewise shifts the concentration-effect curve (for carbachol) to the right, with the maximum response maintained. Conclusions: From the testing conducted on the gastric fundus preparation, we can deduce that atropine shifts the concentration-effect curves for carbachol to the right. A similar effect is produced by pirenzepine, which selectively blocks M1-type muscarinic receptors. The results indicate that in the gastric fundus smooth muscle preparation, M1-type receptors occur also postsynaptically.

  14. Role of M1 receptor in regulation of gastric fundus smooth muscle contraction.

    Science.gov (United States)

    Gajdus, Marta; Szadujkis-Szadurska, Katarzyna; Szadujkis-Szadurski, Leszek; Glaza, Izabela; Szadujkis-Szadurski, Rafał; Olkowska, Joanna

    2011-09-14

    The subject of this study is determination of the influence of drugs on gastric fundus smooth muscle contraction induced by activation of muscarinic receptors M1. Experiments tested interactions between a receptor agonist, carbachol and muscarinic receptor antagonists, atropine and pirenzepine. Testing was conducted on tissues isolated from rat's stomach. Male Wistar rats with weight between 220 g and 360 g were anesthetized by intraperitoneal injection of urethane (120 mg/kg). The stomach was dissected, and later the gastric fundus was isolated. Tissue was placed in a dish for insulated organs with 20 ml in capacity, filled with Krebs fluid. Results contained in the study are average values ± SE. In order to determine statistical significance, the principles of receptor theory were used (Kenakin modification). According to tests, carbachol, in concentrations ranging between 10(-8) M to 10(-4) M, in a dosage-dependent way induces gastric fundus smooth muscle contraction. Presented results indicate that carbachol meets the conditions posed to full agonists. On the other hand, atropine, a non-selective muscarinic receptor antagonist, causes a concentration-dependent shift of concentration-effect curve (for carbachol) to the right, maintaining maximum reaction. According to analysis of the curve determined, we can deduce that atropine meets the conditions posed to competitive antagonists. The use of pirenzepine, a competitive receptor agonist M1, causes shift of concentration-effect curve (for carbachol) to the right, maintaining maximum reaction. From the testing conducted on the preparation of the gastric fundus we can deduce that atropine causes shift of concentration-effect curves for carbachol to the right. A similar effect is released by pirenzepine, selectively blocking muscarinic receptors of M1 type. The results indicate that in the preparation of the gastric fundus smooth muscle, M1 type receptors occur also postsynaptically.
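The rightward shift with preserved maximum described for atropine and pirenzepine is the classic signature of competitive antagonism, captured by the Gaddum/Schild relation: the agonist's EC50 is multiplied by (1 + [B]/KB), where [B] is the antagonist concentration and KB its dissociation constant. A hedged sketch with illustrative parameter values (not fitted to these data; names are ours):

```python
def fractional_response(agonist_m: float, ec50_m: float,
                        antagonist_m: float = 0.0, kb_m: float = 1.0) -> float:
    """Fractional response of a full agonist in the presence of a competitive
    antagonist (Gaddum/Schild): the concentration-effect curve shifts right
    by the dose ratio 1 + [B]/KB while the maximum is preserved."""
    shifted_ec50 = ec50_m * (1.0 + antagonist_m / kb_m)
    return agonist_m / (agonist_m + shifted_ec50)
```

Because only the EC50 is rescaled, a sufficiently high agonist concentration still reaches the full response, which is exactly the "maximum reaction maintained" observation used to classify atropine and pirenzepine as competitive antagonists.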

  15. Subretinal Fibrosis in Stargardt’s Disease with Fundus Flavimaculatus and ABCA4 Gene Mutation

    Directory of Open Access Journals (Sweden)

    Settimio Rossi

    2012-12-01

Purpose: To report on 4 patients affected by Stargardt's disease (STGD) with fundus flavimaculatus (FFM) and ABCA4 gene mutation associated with subretinal fibrosis. Methods: Four patients with a diagnosis of STGD were clinically examined. All 4 cases underwent a full ophthalmologic evaluation, including best-corrected visual acuity measured by the Snellen visual chart, biomicroscopic examination, fundus examination, fundus photography, electroretinogram, microperimetry, optical coherence tomography and fundus autofluorescence. All patients were subsequently screened for ABCA4 gene mutations, identified by microarray genotyping and confirmed by conventional DNA sequencing of the relevant exons. Results: In all 4 patients, the ophthalmologic exam showed areas of subretinal fibrosis in different retinal sectors. In only 1 case were these lesions correlated with ocular trauma, as confirmed by biomicroscopic examination of the anterior segment, which showed a nuclear cataract dislocated to the superior site and vitreous opacities along the lens capsule. The other patients reported a lifestyle characterized by competitive sport activities. The instrumental diagnostic investigations performed confirmed the diagnosis of STGD with FFM in all patients. Moreover, in all 4 affected individuals, mutations in the ABCA4 gene were found. Conclusions: Patients with a diagnosis of STGD associated with FFM can show atypical fundus findings. We report on 4 patients affected by STGD with ABCA4 gene mutation associated with subretinal fibrosis. Our findings suggest that this phenomenon can be accelerated by ocular trauma and also by ocular microtrauma caused by sport activities, highlighting that lifestyle can play a role in the onset of these lesions.

  16. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k are needed. The pixels are square, of 15 μm size. The optical characteristics of the prime focus corrector deliver a field of view where eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The rest of the CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, hosting at most 16 such filters. These are located inside the cryostat, a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  17. Image Sensors Enhance Camera Technologies

    Science.gov (United States)

    2010-01-01

In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  18. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  19. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needed to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be implemented for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, a rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
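The sub-pixel interpolation step mentioned above, remapping the retina-like sensor's non-rectangular pixel layout onto a rectangular display grid, typically reduces to sampling the source image at fractional coordinates. A minimal bilinear-sampling sketch (our illustration, not the authors' code):

```python
import numpy as np

def bilinear_sample(img: np.ndarray, y: float, x: float) -> float:
    """Sample a grayscale image at fractional (y, x) coordinates by
    bilinear interpolation of the four surrounding pixels, clamping
    at the image border."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

A remapping loop would call this once per destination pixel, with (y, x) obtained from the coordinate transformation between the retina-like layout and the display grid.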

  20. Diabetic Retinopathy Screening Using Telemedicine Tools: Pilot Study in Hungary

    Science.gov (United States)

    Eszes, Dóra J.; Szabó, Dóra J.; Russell, Greg; Kirby, Phil; Paulik, Edit; Nagymajtényi, László

    2016-01-01

Introduction. Diabetic retinopathy (DR) is a sight-threatening complication of diabetes. Telemedicine tools can prevent blindness. We aimed to investigate the patients' satisfaction when using such tools (fundus camera examination) and the effect of demographic and socioeconomic factors on participation in screening. Methods. A pilot study involving fundus camera screening and a self-administered questionnaire on participants' experience during fundus examination (comfort, reliability, and future interest in participation), as well as demographic and socioeconomic factors, was performed on 89 patients with known diabetes in Csongrád County, a southeastern region of Hungary. Results. Thirty percent of the patients had never participated in any ophthalmological screening, while 25.7% had DR of some grade based upon a standard fundus camera examination and UK-based DR grading protocol (Spectra™ software). A large majority of the patients were satisfied with the screening and found it reliable and acceptable to undertake examination under pupil dilation; 67.3% were willing to undergo nonmydriatic fundus camera examination again. There was a statistically significant relationship between economic activity, education and marital status, and future interest in participation. Discussion. Participants found digital retinal screening to be reliable and satisfactory. Telemedicine can be a strong tool, supporting eye care professionals and allowing for faster and more comfortable DR screening. PMID:28078306

  1. Diabetic Retinopathy Screening Using Telemedicine Tools: Pilot Study in Hungary

    Directory of Open Access Journals (Sweden)

    Dóra J. Eszes

    2016-01-01

    Full Text Available Introduction. Diabetic retinopathy (DR) is a sight-threatening complication of diabetes. Telemedicine tools can prevent blindness. We aimed to investigate patients’ satisfaction when using such tools (fundus camera examination) and the effect of demographic and socioeconomic factors on participation in screening. Methods. A pilot study involving fundus camera screening and a self-administered questionnaire on participants’ experience during fundus examination (comfort, reliability, and future interest in participation), as well as demographic and socioeconomic factors, was performed on 89 patients with known diabetes in Csongrád County, a southeastern region of Hungary. Results. Thirty percent of the patients had never participated in any ophthalmological screening, while 25.7% had DR of some grade based upon a standard fundus camera examination and a UK-based DR grading protocol (Spectra™ software). A large majority of the patients were satisfied with the screening and found it reliable and acceptable to undertake examination under pupil dilation; 67.3% were willing to undergo nonmydriatic fundus camera examination again. There was a statistically significant relationship between economic activity, education and marital status, and future interest in participation. Discussion. Participants found digital retinal screening to be reliable and satisfactory. Telemedicine can be a strong tool, supporting eye care professionals and allowing for faster and more comfortable DR screening.

  2. Combustion pinhole camera system

    Science.gov (United States)

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  3. The Star Formation Camera

    CERN Document Server

    Scowen, Paul A; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah; Rhoads, James; Roberge, Aki; Siegmund, Oswald; Shaklan, Stuart; Smith, Nathan; Stern, Daniel; Tumlinson, Jason; Windhorst, Rogier; Woodruff, Robert

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517 nm) and a red (517-1075 nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant to the births and life cycles of stars and their planetary systems, and to investigate and understand the range of environments, feedback mechanisms, and other factors that most affect the outcome of the star and planet formation process. This program addresses the origins and evolution of stars, galaxies, and cosmic structure and has direct relevance for the formation and survival of planetary systems like our Solar System and planets like Earth. We present the design and performance specifications resulting from the implementation study of the camera, conducted ...

  4. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  5. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of the sphere of a hemispherical, X-radiation-sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation-sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities, which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  6. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer and comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  7. Prevalencia de retinopatía diabética mediante cribado con retinógrafo no midriático Prevalence of diabetic retinopathy using non-mydriatic retinography

    Directory of Open Access Journals (Sweden)

    A. Gibelalde

    2010-12-01

    Full Text Available Objective: To study the prevalence of diabetic retinopathy using a non-mydriatic retinograph and to assess its usefulness as a screening method in the Donostialdea district. Methods: A prospective study was performed including 2,444 diabetic patients referred by their primary care physician and/or endocrinologist. Non-mydriatic retinography of the central 45 degrees, visual acuity testing and non-contact tonometry were performed in all patients. The information was sent to the hospital to be evaluated by an ophthalmologist specializing in the retina. Results: 15.02% of the patients were diabetics on dietary treatment, 62.55% were non-insulin-dependent diabetics and 22.43% were insulin-dependent diabetics. We observed a prevalence of diabetic retinopathy of 9.36%: 5.27% of patients presented mild non-proliferative diabetic retinopathy (NPDR), 2.21% moderate NPDR, 1.67% severe NPDR and 0.12% proliferative DR; 8.22% presented ocular hypertension. Conclusions: A low prevalence of DR was observed in the patients of our sample. Telemedicine with a non-mydriatic camera is an important tool for the early diagnosis of diabetic retinopathy and can be applied to other ophthalmological diseases such as glaucoma.

  8. Enhanced depth imaging optical coherence tomography and fundus autofluorescence findings in bilateral choroidal osteoma: a case report

    Directory of Open Access Journals (Sweden)

    Muhammet Kazim Erol

    2013-06-01

    Full Text Available The authors present enhanced depth imaging optical coherence tomography (EDI OCT) and fundus autofluorescence (FAF) characteristics of a patient with bilateral choroidal osteoma and try to make a correlation between the two imaging techniques. Two eyes of a patient with choroidal osteoma underwent complete ophthalmic examination. Enhanced depth imaging optical coherence tomography revealed a cage-like pattern, which corresponded to the calcified region of the tumor. Fundus autofluorescence imaging of the same area showed slight hyperautofluorescence. Three different reflectivity patterns in the decalcified area were defined. In the areas of subretinal fluid, outer segment elongations similar to central serous chorioretinopathy were observed. Hyperautofluorescent spots were evident in fundus autofluorescence in the same area. Calcified and decalcified portions of choroidal osteoma as well as the atrophy of choriocapillaris demonstrated different patterns with enhanced depth imaging and fundus autofluorescence imaging. Both techniques were found to be beneficial in the diagnosis and follow-up of choroidal osteoma.

  9. Enhanced depth imaging optical coherence tomography and fundus autofluorescence findings in bilateral choroidal osteoma: a case report.

    Science.gov (United States)

    Erol, Muhammet Kazim; Coban, Deniz Turgut; Ceran, Basak Bostanci; Bulut, Mehmet

    2013-01-01

    The authors present enhanced depth imaging optical coherence tomography (EDI OCT) and fundus autofluorescence (FAF) characteristics of a patient with bilateral choroidal osteoma and try to make a correlation between two imaging techniques. Two eyes of a patient with choroidal osteoma underwent complete ophthalmic examination. Enhanced depth imaging optical coherence tomography revealed a cage-like pattern, which corresponded to the calcified region of the tumor. Fundus autofluorescence imaging of the same area showed slight hyperautofluorescence. Three different reflectivity patterns in the decalcified area were defined. In the areas of subretinal fluid, outer segment elongations similar to central serous chorioretinopathy were observed. Hyperautofluorescent spots were evident in fundus autofluorescence in the same area. Calcified and decalcified portions of choroidal osteoma as well as the atrophy of choriocapillaris demonstrated different patterns with enhanced depth imaging and fundus autofluorescence imaging. Both techniques were found to be beneficial in the diagnosis and follow-up of choroidal osteoma.

  10. Enhanced depth imaging optical coherence tomography and fundus autofluorescence findings in bilateral choroidal osteoma: a case report

    Energy Technology Data Exchange (ETDEWEB)

    Erol, Muhammet Kazim; Coban, Deniz Turgut; Ceran, Basak Bostanci; Bulut, Mehmet, E-mail: muhammetkazimerol@gmail.com [Kazim Erol. Antalya Training and Research Hospital, Ophthalmology Department, Antalya (Turkey)

    2013-11-01

    The authors present enhanced depth imaging optical coherence tomography (EDI OCT) and fundus autofluorescence (FAF) characteristics of a patient with bilateral choroidal osteoma and try to make a correlation between two imaging techniques. Two eyes of a patient with choroidal osteoma underwent complete ophthalmic examination. Enhanced depth imaging optical coherence tomography revealed a cage-like pattern, which corresponded to the calcified region of the tumor. Fundus autofluorescence imaging of the same area showed slight hyperautofluorescence. Three different reflectivity patterns in the decalcified area were defined. In the areas of subretinal fluid, outer segment elongations similar to central serous chorioretinopathy were observed. Hyperautofluorescent spots were evident in fundus autofluorescence in the same area. Calcified and decalcified portions of choroidal osteoma as well as the atrophy of choriocapillaris demonstrated different patterns with enhanced depth imaging and fundus autofluorescence imaging. Both techniques were found to be beneficial in the diagnosis and follow-up of choroidal osteoma. (author)

  11. Ultrawide-field fundus photography of the first reported case of gyrate atrophy from Australia.

    Science.gov (United States)

    Moloney, Thomas P; O'Hagan, Stephen; Lee, Lawrence

    2014-01-01

    Gyrate atrophy of the choroid and retina is a rare chorioretinal dystrophy inherited in an autosomal recessive pattern. We describe the first documented case of gyrate atrophy from Australia in a 56-year-old woman with a previous diagnosis of retinitis pigmentosa and worsening night vision in her right eye over several years. She was myopic and bilaterally pseudophakic, and fundus examination revealed pale optic discs and extensive peripheral chorioretinal atrophy exposing bare sclera bilaterally, with only small islands of normal-appearing retina at each posterior pole. Visual field testing showed grossly constricted fields, blood testing showed hyperornithinemia, and further questioning revealed consanguinity between the patient's parents. We then used the patient's typical retinal findings of gyrate atrophy to demonstrate the potential use of ultrawide-field fundus photography and angiography in diagnosis and in monitoring response to future treatment.

  12. Detection of Glaucomatous Eye via Color Fundus Images Using Fractal Dimensions

    Directory of Open Access Journals (Sweden)

    J. Jan

    2008-09-01

    Full Text Available This paper describes a method for glaucomatous eye detection based on fractal description followed by classification. Two methods for fractal dimension estimation, which give different image/tissue descriptions, are presented. Fundus color images are used, in which the areas with retinal nerve fibers are analyzed. The presented method shows that fractal dimensions can be used as features for the detection of retinal nerve fiber losses, which are a sign of a glaucomatous eye.
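The abstract does not specify which fractal dimension estimators are used; a common choice for binary fundus structures is the box-counting dimension, sketched below in plain NumPy. The scales and the sanity-check image are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    count occupied s x s boxes at several scales s, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # trim so the image tiles exactly into s x s boxes
        trimmed = mask[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# sanity check: a filled square region should come out near dimension 2
mask = np.ones((128, 128), dtype=bool)
print(round(box_counting_dimension(mask), 2))   # -> 2.0
```

In a glaucoma-screening pipeline of the kind the abstract describes, `mask` would be a binarized retinal-nerve-fiber region, and the resulting dimension would serve as one classifier feature.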

  13. Influence of antioxidant depletion on nitrergic relaxation in the pig gastric fundus

    OpenAIRE

    2002-01-01

    The hypothesis that endogenous tissue antioxidants might explain the inability of the superoxide generators 6-anilino-5,8-quinolinedione (LY83583) and hydroquinone (HQ) and of the NO-scavengers hydroxocobalamin (HC) and 2-(4-carboxyphenyl)-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide (c-PTIO) to affect nitrergic neurotransmission in the porcine gastric fundus was tested by selective pharmacological depletion of respectively Cu/Zn superoxide dismutase (Cu/Zn SOD) and reduced glutathione (GSH)...

  14. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by a drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
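The Dice coefficient the abstract reports is a standard overlap score for segmentation masks; a minimal reference implementation (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # "ground truth": 36 px
b = np.zeros((10, 10), dtype=bool); b[4:8, 2:8] = True   # "prediction": 24 px, inside a
print(dice(a, b))   # 2*24 / (36 + 24) = 0.8
```

A Dice of 0.87-0.97 for disc segmentation, as quoted in the abstract, therefore corresponds to near-complete overlap between predicted and ground-truth boundaries.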

  15. Method for Calculating the Optical Diffuse Reflection Coefficient for the Ocular Fundus

    Science.gov (United States)

    Lisenko, S. A.; Kugeiko, M. M.

    2016-07-01

    We have developed a method for calculating the optical diffuse reflection coefficient for the ocular fundus, taking into account multiple scattering of light in its layers (retina, epithelium, choroid) and multiple reflection of light between layers. The method is based on the formulas for optical "combination" of the layers of the medium, in which the optical parameters of the layers (absorption and scattering coefficients) are replaced by some effective values, different for cases of directional and diffuse illumination of the layer. Coefficients relating the effective optical parameters of the layers and the actual values were established based on the results of a Monte Carlo numerical simulation of radiation transport in the medium. We estimate the uncertainties in retrieval of the structural and morphological parameters for the fundus from its diffuse reflectance spectrum using our method. We show that the simulated spectra correspond to the experimental data and that the estimates of the fundus parameters obtained as a result of solving the inverse problem are reasonable.
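The idea of "combining" layers while accounting for multiple inter-layer reflections can be illustrated with the classical adding relation for two diffuse layers, which sums the geometric series of bounces between them. This is a generic textbook relation used as a sketch, not the paper's actual formulas (which use effective optical parameters calibrated by Monte Carlo simulation), and the reflectance/transmittance numbers below are purely illustrative.

```python
def combine_layers(R1, T1, R2):
    """Diffuse reflectance of layer 1 stacked on layer 2, summing the
    geometric series of inter-layer reflections:
    R = R1 + T1^2 R2 (1 + R1 R2 + (R1 R2)^2 + ...) = R1 + T1^2 R2 / (1 - R1 R2)."""
    return R1 + T1**2 * R2 / (1.0 - R1 * R2)

# e.g. a retina-like front layer (R1, T1) over a choroid-like backing (R2)
# -- illustrative numbers only
R = combine_layers(R1=0.05, T1=0.80, R2=0.30)
print(round(R, 4))
```

Applying the relation repeatedly (retina onto epithelium, then onto choroid) yields the reflectance of the whole stack, which is the structure of the layer-combination approach the abstract describes.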

  16. Effect of lymphadenectomy extent on advanced gastric cancer located in the cardia and fundus

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    AIM: To analyze the prognostic impact of lymphadenectomy extent in advanced gastric cancer located in the cardia and fundus. METHODS: Two hundred and thirty-six patients with advanced gastric cancer located in the cardia and fundus who underwent D2 curative resection were analyzed retrospectively. Relationships between the number of lymph nodes (LNs) dissected and survival were analyzed among different clinical stage subgroups. RESULTS: The 5-year overall survival rate of the entire cohort was 37.5%. Multivariate prognostic variables were total LNs dissected (P < 0.0001; or number of negative LNs examined, P < 0.0001), number of positive LNs (P < 0.0001), T category (P < 0.0001) and tumor size (P = 0.015). The greatest survival differences were observed at cutoff values of 20 LNs resected for stage II (P = 0.0136), 25 for stage III (P < 0.0001), 30 for stage IV (P = 0.0002), and 15 for all patients (P = 0.0024). Based on the statistically assumed linearity as best fit, linear regression showed a significant survival enhancement based on increasing negative LNs for patients of stages III (P = 0.013) and IV (P = 0.035). CONCLUSION: To improve the long-term survival of patients with advanced gastric cancer located in the cardia and fundus, removing at least 20 LNs for stage II, 25 LNs for stage III, and 30 LNs for stage IV patients during D2 radical dissection is recommended.

  17. Detection of retinal nerve fiber layer defects in retinal fundus images using Gabor filtering

    Science.gov (United States)

    Hayashi, Yoshinori; Nakagawa, Toshiaki; Hatanaka, Yuji; Aoyama, Akira; Kakogawa, Masakatsu; Hara, Takeshi; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

    Retinal nerve fiber layer defect (NFLD) is one of the most important findings for the diagnosis of glaucoma reported by ophthalmologists. However, such changes can be overlooked, especially in mass screenings, because ophthalmologists have limited time to search for a number of different changes relevant to the diagnosis of various diseases such as diabetes, hypertension and glaucoma. Therefore, the use of a computer-aided detection (CAD) system can improve the results of diagnosis. In this work, a technique for the detection of NFLDs in retinal fundus images is proposed. In the preprocessing step, blood vessels are "erased" from the original retinal fundus image by using morphological filtering. The preprocessed image is then transformed into a rectangular array. NFLD regions appear as vertical dark bands in the transformed image. Gabor filtering is then applied to enhance the vertical dark bands. False positives (FPs) are reduced by a rule-based method which uses information on the location and width of each candidate region. The detected regions are back-transformed into the original configuration. In this preliminary study, 71% of NFLD regions were detected, with an average of 3.2 FPs per image. In conclusion, we have developed a technique for the detection of NFLDs in retinal fundus images. Promising results have been obtained in this initial study.
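The Gabor-filtering step for enhancing vertical dark bands can be sketched as follows. The kernel size, sigma and wavelength are illustrative assumptions, not the paper's parameters, and the synthetic image simply stands in for the "transformed" fundus array the abstract describes.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(ksize=21, sigma=4.0, wavelength=10.0, theta=0.0):
    """Real part of a Gabor kernel; theta=0 makes the filter respond to
    vertically oriented structures."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()          # zero-mean so flat regions give no response

# synthetic "transformed" image: a dark vertical band on a bright background
img = np.ones((64, 64))
img[:, 30:34] = 0.2
resp = convolve(img, gabor_kernel())
col_strength = resp.mean(axis=0)
band_col = int(np.argmin(col_strength))   # most negative response near the band centre
```

Thresholding `resp` at strongly negative values would yield the NFLD candidate regions, to which the rule-based false-positive reduction would then be applied.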

  18. Fully automatic algorithm for the analysis of vessels in the angiographic image of the eye fundus

    Directory of Open Access Journals (Sweden)

    Koprowski Robert

    2012-06-01

    Full Text Available Background. The available scientific literature contains descriptions of manual, semi-automated and automated methods for analysing angiographic images. The presented algorithms segment vessels, calculating their tortuosity or number in a given area. We describe a statistical analysis of the inclination of the vessels in the fundus as related to their distance from the center of the optic disc. Methods. The paper presents an automated method for analysing vessels found in angiographic images of the eye using a Matlab-implemented algorithm. It performs filtration and convolution operations with suggested masks. The result is an image containing information on the location of vessels and their inclination angle in relation to the center of the optic disc. This is a new approach to the analysis of vessels whose usefulness has been confirmed in the diagnosis of hypertension. Results. The proposed algorithm analyzed and processed the images of the eye fundus using a classifier in the form of decision trees. It enabled the proper classification of healthy patients and those with hypertension. The result is a very good separation of healthy subjects from the hypertensive ones: sensitivity 83%, specificity 100%, accuracy 96%. This confirms the practical usefulness of the proposed method. Conclusions. This paper presents an algorithm for the automatic analysis of morphological parameters of the fundus vessels. Such an analysis is performed during fluorescein angiography of the eye. The presented algorithm automatically calculates the global statistical features connected with both tortuosity of vessels and their total area or number.

  19. Fully automatic algorithm for the analysis of vessels in the angiographic image of the eye fundus.

    Science.gov (United States)

    Koprowski, Robert; Teper, Sławomir Jan; Węglarz, Beata; Wylęgała, Edward; Krejca, Michał; Wróbel, Zygmunt

    2012-06-22

    The available scientific literature contains descriptions of manual, semi-automated and automated methods for analysing angiographic images. The presented algorithms segment vessels calculating their tortuosity or number in a given area. We describe a statistical analysis of the inclination of the vessels in the fundus as related to their distance from the center of the optic disc. The paper presents an automated method for analysing vessels which are found in angiographic images of the eye using a Matlab implemented algorithm. It performs filtration and convolution operations with suggested masks. The result is an image containing information on the location of vessels and their inclination angle in relation to the center of the optic disc. This is a new approach to the analysis of vessels whose usefulness has been confirmed in the diagnosis of hypertension. The proposed algorithm analyzed and processed the images of the eye fundus using a classifier in the form of decision trees. It enabled the proper classification of healthy patients and those with hypertension. The result is a very good separation of healthy subjects from the hypertensive ones: sensitivity - 83%, specificity - 100%, accuracy - 96%. This confirms a practical usefulness of the proposed method. This paper presents an algorithm for the automatic analysis of morphological parameters of the fundus vessels. Such an analysis is performed during fluorescein angiography of the eye. The presented algorithm automatically calculates the global statistical features connected with both tortuosity of vessels and their total area or their number.
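The core quantity both versions of this abstract describe, the inclination angle of a vessel relative to the optic-disc center, can be sketched as follows. The original algorithm is Matlab-based; this is a NumPy illustration with made-up point coordinates, not the authors' code.

```python
import numpy as np

def inclination_angles(points, directions, disc_center):
    """Angle (degrees) between each vessel segment's local direction and the
    radial direction from the optic-disc center -- 0 deg means the vessel
    runs straight toward/away from the disc."""
    radial = points - disc_center
    radial = radial / np.linalg.norm(radial, axis=1, keepdims=True)
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    # absolute value makes the angle sign-agnostic (a vessel has no "forward")
    cosang = np.clip(np.abs((radial * d).sum(axis=1)), 0.0, 1.0)
    return np.degrees(np.arccos(cosang))

center = np.array([50.0, 50.0])
pts = np.array([[80.0, 50.0], [50.0, 90.0]])   # hypothetical vessel sample points
dirs = np.array([[1.0, 0.0], [1.0, 0.0]])      # both segments run along +x
angles = inclination_angles(pts, dirs, center)
print(angles)   # -> [ 0. 90.]: radial vessel vs. tangential vessel
```

Collecting these angles as a function of distance from the disc center gives exactly the kind of global statistical feature the classifier in the paper consumes.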

  20. Adaptive compressive sensing camera

    Science.gov (United States)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple concept that each pixel is a charge bucket whose charges come from the Einstein photoelectric conversion effect. Applying the manufacturing design principle, we allow altering each working component by at most one step. We then simulated what such a camera can do for real-world persistent surveillance, taking into account diurnal, all-weather and seasonal variations. The data storage savings are immense, and the order of magnitude of the saving is inversely proportional to the target angular speed. We designed two new components for the CCD camera. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as a dual photon detector (PD) analog circuit for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, there is a purely random sparse matrix [Φ], implemented at each bucket-pixel level as the charge-transport bias voltage toward its neighborhood buckets or not; if not, the charge goes to the ground drainage. Since a snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing: FFT, a threshold on the significant Fourier-mode components, and inverse FFT to check the PSNR; and (ii) post-processing image recovery, performed selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the new-frame selection by the SAH circuitry must determine the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]_{M,N}: M(t) = K(t) log N(t).
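The compressive-sensing core of the abstract, measuring a K-sparse signal of length N with only M ≈ K log N random projections y = Φx and recovering it by sparse optimization, can be sketched numerically. The abstract's recovery step uses L1 linear programming; the sketch below substitutes greedy orthogonal matching pursuit (a common stand-in with the same goal), and the constant factor in M is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 5                       # signal length, sparsity
M = int(4 * K * np.log(N))          # measurement count, M = O(K log N)

# K-sparse ground-truth signal
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.normal(0, 1, K)

Phi = rng.normal(0, 1 / np.sqrt(M), (M, N))   # purely random measurement matrix
y = Phi @ x                                   # compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then re-fit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        idx.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print(np.linalg.norm(x - x_hat))    # reconstruction error
```

The point of the demonstration is the ratio M/N ≈ 0.43 here: far fewer stored values than pixels, which is the storage saving the abstract claims.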

  1. Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; ParthaSarathi, M; Uher, Vaclav; Burget, Radim

    2016-02-01

    Glaucoma is a disease of the retina which is one of the most common causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for glaucoma diagnosis from the digital fundus image. In this paper, wavelet feature extraction is followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike the existing research works, where the features are computed from the complete fundus or a sub-image of the fundus, this work is based on feature extraction from the segmented, blood-vessel-removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant than features of the whole fundus or a sub-image in the detection of glaucoma. The accuracy of glaucoma identification achieved in this work is 94.7%, and a comparison with existing methods of glaucoma detection from fundus images indicates that the proposed approach has improved classification accuracy.
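The abstract does not specify the wavelet family or the features; a common setup is a one-level 2D Haar decomposition of the optic-disc patch with per-subband energy statistics, sketched below in plain NumPy. The random patch stands in for a segmented, vessel-removed optic disc, and the specific features (mean absolute value and energy) are illustrative assumptions.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet decomposition; returns the
    approximation and the horizontal/vertical/diagonal detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def wavelet_energy_features(img):
    """Mean absolute value and energy of each detail subband -- a simple
    texture descriptor for the segmented optic-disc region."""
    _, LH, HL, HH = haar2d(img.astype(float))
    feats = []
    for band in (LH, HL, HH):
        feats += [np.mean(np.abs(band)), np.mean(band**2)]
    return np.array(feats)

disc = np.random.default_rng(2).random((64, 64))   # stand-in for an optic-disc patch
feats = wavelet_energy_features(disc)
print(feats.shape)   # six features, two per detail subband
```

Such a feature vector would then go through the genetic feature selection and classifier training the abstract describes.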

  2. Digital camera in ophthalmology

    Directory of Open Access Journals (Sweden)

    Ashish Mitra

    2015-01-01

    Full Text Available Ophthalmology is an expensive field, and imaging is an indispensable modality in ophthalmology; in developing countries, including India, it is not possible for every ophthalmologist to afford a slit-lamp photography unit. We present here our experience of slit-lamp photography using a digital camera. Good-quality pictures of anterior and posterior segment disorders were captured using readily available devices. It can be used as a good teaching tool for residents learning ophthalmology and can also be a method to document lesions, which is often necessary for medicolegal purposes. It is a technique which is simple, inexpensive, and has a short learning curve.

  3. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  4. HONEY -- The Honeywell Camera

    Science.gov (United States)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  5. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  6. PAU camera: detectors characterization

    Science.gov (United States)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide-field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels each, with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K. K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This work is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is performed by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The devices themselves are characterized with several tests: the photon transfer curve (PTC), which yields the electronic gain, the linearity versus light stimulus, the full-well capacity and the cosmetic defects; and measurements of the read-out noise, the dark current, the stability versus temperature and the light remanence.

  7. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  8. Agreement of retinal nerve fiber layer defect location between red-free fundus photography and cirrus HD-OCT maps.

    Science.gov (United States)

    Hwang, Young Hoon; Kim, Yong Yeon; Kim, Hwang Ki; Sohn, Yong Ho

    2014-11-01

    To investigate the agreement of angular locations of retinal nerve fiber layer (RNFL) defect margins in glaucomatous eyes by using red-free fundus photographs and Cirrus high-definition optical coherence tomography (OCT) RNFL deviation and thickness maps. We examined 380 RNFL defects that showed clear margins in red-free fundus photographs. The OCT deviation and thickness maps were overlaid on the corresponding red-free fundus photographs. A reference line was drawn between the disc center and the macular center. Lines were also drawn between the optic disc center and the point where the RNFL defect margins crossed the OCT scan circle. The angle between the reference and defect-margin lines defined the angular location of the defect margin. Angular locations of proximal (nearest to the reference) and distal (farthest from the reference) RNFL defect margins on OCT deviation and thickness maps were compared to the locations on red-free fundus photographs. The angular locations of proximal and distal RNFL defect margins on OCT thickness maps showed good agreement with red-free fundus photographs. However, OCT deviation maps showed greater angular locations for both proximal and distal RNFL defect margins compared with red-free fundus photographs, especially in eyes with higher myopia (p < 0.05). Red-free fundus photographs and OCT thickness maps showed good agreement for the RNFL defect margin identification. However, this was not the case for deviation maps, especially in myopic eyes. This finding should be considered when evaluating RNFL defects using OCT maps.
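
The angular-location measurement described above reduces to the angle at the disc center between the disc-to-macula reference line and the line to the point where a defect margin crosses the OCT scan circle. A minimal geometric sketch (the coordinates and the `angular_location` helper are illustrative, not from the paper):

```python
import math

def angular_location(disc_center, macula_center, margin_point):
    """Angle (degrees) at the disc center between the disc-to-macula
    reference line and the line to a point where an RNFL defect margin
    crosses the OCT scan circle."""
    ref = math.atan2(macula_center[1] - disc_center[1],
                     macula_center[0] - disc_center[0])
    dfm = math.atan2(margin_point[1] - disc_center[1],
                     margin_point[0] - disc_center[0])
    ang = math.degrees(dfm - ref) % 360.0
    return min(ang, 360.0 - ang)   # unsigned angle in [0, 180]

# Disc at the origin, macula straight temporal; a margin point 30 degrees away:
print(angular_location((0, 0), (10, 0),
                       (math.cos(math.radians(30)),
                        math.sin(math.radians(30)))))  # roughly 30 degrees
```

Agreement between modalities can then be assessed by comparing these angles for the same margin located on the photograph and on each OCT map.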

  9. Camera artifacts in IUE spectra

    Science.gov (United States)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with accompanying tables of prominent artifacts in the high-dispersion and raw images, along with a median image of the sky background for each IUE camera.

  10. Radiation camera motion correction system

    Science.gov (United States)

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  11. Influence of antioxidant depletion on nitrergic relaxation in the pig gastric fundus.

    Science.gov (United States)

    Colpaert, E E; Timmermans, J-P; Lefebvre, R A

    2002-02-01

    1. The hypothesis that endogenous tissue antioxidants might explain the inability of the superoxide generators 6-anilino-5,8-quinolinedione (LY83583) and hydroquinone (HQ) and of the NO-scavengers hydroxocobalamin (HC) and 2-(4-carboxyphenyl)-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide (c-PTIO) to affect nitrergic neurotransmission in the porcine gastric fundus was tested by selective pharmacological depletion of respectively Cu/Zn superoxide dismutase (Cu/Zn SOD) and reduced glutathione (GSH) in circular smooth muscle preparations. 2. Diethyldithiocarbamate (DETCA; 3x10(-3) M), which almost completely abolished tissue Cu/Zn SOD activity, had no effect per se on nitrergic relaxations induced by either electrical field stimulation (EFS; 4 Hz, 10 s) or exogenous nitric oxide (NO; 10(-5) M). In these DETCA-treated tissues however, electrically-induced nitrergic relaxations became sensitive to inhibition by LY83583 (10(-5) M) or HC (10(-4) M), but not by HQ (10(-4) M) or c-PTIO (10(-4) M); only for the combination of DETCA plus LY83583, this inhibition was partially reversed by exogenous Cu/Zn SOD (1000 u ml(-1)). 3. Immunohistochemical analysis of porcine gastric fundus revealed a 100% colocalization of Cu/Zn SOD and neuronal nitric oxide synthase (nNOS) in the intrinsic neurons of the myenteric plexus. 4. Buthionine sulphoximine (BSO; 10(-3) M) in the absence or presence of LY83583 (10(-5) M) or HC (10(-4) M) did not alter nitrergic relaxations, although it reduced per se the tissue GSH content to 62% of control. 5. Pharmacological depletion studies, corroborated by immunohistochemical data, thus suggest a role for Cu/Zn SOD but not for GSH in nitrergic neurotransmission in the porcine gastric fundus.

  12. Noninvasive optoacoustic temperature determination at the fundus of the eye during laser irradiation.

    Science.gov (United States)

    Schule, Georg; Huttmann, Gereon; Framme, Carsten; Roider, Johann; Brinkmann, Ralf

    2004-01-01

    In all fundus laser treatments of the eye, the temperature increase is not exactly known. In order to optimize treatments, an online temperature determination is preferable. We investigated a noninvasive optoacoustic method to monitor the fundus temperature during pulsed laser irradiation. When laser pulses are applied to the fundus, thermoelastic pressure waves are emitted due to thermal expansion of the heated tissue. At constant pulse energy, the amplitude of the pressure wave increases linearly with the base temperature between 30 and 80 degrees C. This method was evaluated in vitro on porcine retinal pigment epithelium (RPE) cell samples and clinically during selective RPE treatment with repetitive microsecond laser pulses. During the irradiation of porcine RPE with a neodymium-doped yttrium lithium fluoride (Nd:YLF) laser (527 nm, 1.7 micros, 500 Hz repetition rate, 160 mJ/cm(2)), an increase in the base temperature of 30+/-4 degrees C after 100 pulses was found. During patient treatments, a temperature increase of 60+/-11 degrees C after 100 pulses at a 500-Hz repetition rate and of 7+/-1 degrees C after 30 pulses at 100 Hz and 520 mJ/cm(2) was found. All measured data were in good agreement with heat diffusion calculations. Optoacoustic methods can be used to noninvasively determine retinal temperatures during pulsed laser treatment of the eye. This technique can also be adapted to continuous-wave photocoagulation, photodynamic therapy, transpupillary thermotherapy, and other applications involving laser-heated tissue.
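
Because the pressure amplitude rises linearly with base temperature at constant pulse energy, temperature can be recovered by fitting a calibration line and inverting it. A minimal sketch under that linearity assumption (the calibration numbers below are made up for illustration, not taken from the study):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def temperature_from_amplitude(p, slope, intercept):
    """Invert the calibration line p = slope*T + intercept."""
    return (p - intercept) / slope

# Hypothetical calibration: amplitudes recorded at known base temperatures.
temps = [30, 40, 50, 60, 70, 80]              # degrees C
amps = [1.00, 1.10, 1.20, 1.30, 1.40, 1.50]   # arbitrary pressure units
s, b = fit_line(temps, amps)

# A measured amplitude of 1.25 then maps back to a base temperature:
print(round(temperature_from_amplitude(1.25, s, b), 1))  # 55.0
```

In practice the calibration would be established per eye or per tissue sample before inverting amplitudes measured during treatment.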

  13. Differentiation of ocular fundus fluorophores by fluorescence lifetime imaging using multiple excitation and emission wavelengths

    Science.gov (United States)

    Hammer, M.; Schweitzer, D.; Schenke, S.; Becker, W.; Bergmann, A.

    2006-10-01

    Ocular fundus autofluorescence imaging has recently been introduced into clinical diagnostics. It is in use for the observation of the age pigment lipofuscin, a precursor of age-related macular degeneration (AMD). But other fluorophores may be of interest too: the redox pair FAD-FADH2 provides information on the retinal energy metabolism, advanced glycation end products (AGEs) indicate protein glycation associated with pathologic processes in diabetes as well as AMD, and alterations in the fluorescence of collagen and elastin in connective tissue give us the opportunity to observe fibrosis by fluorescence imaging. This, however, requires techniques able to differentiate particular fluorophores despite the limited permissible ocular exposure and the restricted excitation wavelengths (limited by the transmission of the human ocular lens to >400 nm). We present an ophthalmic laser scanning system (SLO), equipped with picosecond laser diodes (FWHM 100 ps, 446 nm or 468 nm, respectively) and time-correlated single photon counting (TCSPC) in two emission bands (500-560 nm and 560-700 nm). The decays were fitted by a bi-exponential model. Fluorescence spectra were measured with a Fluorolog fluorescence spectrometer. Upon excitation at 446 nm, the fluorescence of AGEs, FAD, and lipofuscin was found to peak at 503 nm, 525 nm, and 600 nm, respectively. Accordingly, the statistical distribution of the fluorescence decay times was found to depend on the different excitation wavelengths and emission bands used. The use of multiple excitation and emission wavelengths in conjunction with fluorescence lifetime imaging allows us to discriminate between intrinsic fluorophores of the ocular fundus. Taken together with our knowledge of the anatomical structure of the fundus, these findings suggest an association of the short, middle and long fluorescence decay times with the retinal pigment epithelium, the retina, and connective tissue, respectively.

  14. Evaluation of Fundus Blood Flow in Normal Individuals and Patients with Internal Carotid Artery Obstruction Using Laser Speckle Flowgraphy

    Science.gov (United States)

    Akiyama, Hideo; Shimoda, Yukitoshi; Li, Danjie; Kishi, Shoji

    2017-01-01

    Purpose We investigated whether laser speckle flowgraphy (LSFG) results are comparable between the two eyes and whether LSFG is useful for diagnosing blood-flow disparity in ocular ischemic syndrome (OIS) patients. Methods We compared the mean blur rate (MBR) value for various fundus regions in both eyes of 41 healthy subjects and 15 internal carotid artery occlusion (ICAO) cases. In the control subjects we calculated the standard value of the Laterality Index (LI), a comparison of MBR between the two eyes in each region. We then investigated the correlation between the two eyes for the LI of the entire fundus, and its relation to the degree of ICAO and to visual function. Results The disparity of the LIs between the two eyes was least in the entire area of the fundus in control subjects, and there was a significant correlation between the two eyes of the 41 healthy individuals (P = 0.019). Significant correlations were found between the LI, visual acuity and degree of ICAO. The specificity and sensitivity of the LI for the entire area were 93.8% and 100%, respectively. Conclusions LSFG revealed that normal individuals have symmetrical fundus blood flow. LSFG could detect OIS and might be a useful tool for detecting disparities in fundus blood flow. PMID:28056061
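
The abstract defines the Laterality Index (LI) only as a comparison of MBR between the two eyes; the exact formula is not given. One plausible symmetry measure, shown purely for illustration, normalises the absolute MBR difference by the two-eye mean:

```python
def laterality_index(mbr_right, mbr_left):
    """One plausible between-eye symmetry measure for mean blur rate
    (MBR): absolute difference normalised by the two-eye mean. 0 means
    perfectly symmetrical flow; larger values mean greater disparity.
    (The paper's exact LI definition may differ; this is illustrative.)"""
    mean = (mbr_right + mbr_left) / 2.0
    return abs(mbr_right - mbr_left) / mean

print(laterality_index(20.0, 20.0))            # 0.0
print(round(laterality_index(20.0, 10.0), 2))  # 0.67
```

A screening rule would then flag an eye pair whose index exceeds a cutoff derived from the healthy-control distribution.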

  15. Labor-Associated Gene Expression in the Human Uterine Fundus, Lower Segment, and Cervix

    Science.gov (United States)

    Bukowski, Radek; Hankins, Gary D. V; Saade, George R; Anderson, Garland D; Thornton, Steven

    2006-01-01

    Background Preterm labor, failure to progress, and postpartum hemorrhage are common causes of maternal and neonatal mortality or morbidity. All result from defects in the complex mechanisms controlling labor, which coordinate changes in the uterine fundus, lower segment, and cervix. We aimed to assess labor-associated gene expression profiles in these functionally distinct areas of the human uterus by using microarrays. Methods and Findings Samples of uterine fundus, lower segment, and cervix were obtained from patients at term (mean ± SD = 39.1 ± 0.5 wk) prior to the onset of labor (n = 6), or in the active phase of labor with spontaneous onset (n = 7). Expression of 12,626 genes was evaluated using microarrays (Human Genome U95A; Affymetrix) and compared between labor and non-labor samples. Genes with the largest labor-associated change and the lowest variability in expression are likely to be fundamental for parturition, so gene expression was ranked accordingly. From the 500 genes with the highest rank we identified genes with similar expression profiles using two independent clustering techniques. Sets of genes with a probability of chance grouping by both techniques of less than 0.01 represented 71.2%, 81.8%, and 79.8% of the 500 genes in the fundus, lower segment, and cervix, respectively. We identified 14, 14, and 12 such sets of genes in the fundus, lower segment, and cervix, respectively. This enabled networks of co-regulated and co-expressed genes to be discovered. Many genes within the same cluster shared similar functions or had functions pertinent to the process of labor. Conclusions Our results provide support for many of the established processes of parturition and also describe novel-to-labor genes not previously associated with this process. The elucidation of these mechanisms, which are likely to be fundamental for controlling labor, is an important prerequisite to the development of effective treatments for major obstetric problems—including prematurity

  16. Labor-associated gene expression in the human uterine fundus, lower segment, and cervix.

    Directory of Open Access Journals (Sweden)

    Radek Bukowski

    2006-06-01

    Full Text Available BACKGROUND: Preterm labor, failure to progress, and postpartum hemorrhage are common causes of maternal and neonatal mortality or morbidity. All result from defects in the complex mechanisms controlling labor, which coordinate changes in the uterine fundus, lower segment, and cervix. We aimed to assess labor-associated gene expression profiles in these functionally distinct areas of the human uterus by using microarrays. METHODS AND FINDINGS: Samples of uterine fundus, lower segment, and cervix were obtained from patients at term (mean +/- SD = 39.1 +/- 0.5 wk) prior to the onset of labor (n = 6), or in the active phase of labor with spontaneous onset (n = 7). Expression of 12,626 genes was evaluated using microarrays (Human Genome U95A; Affymetrix) and compared between labor and non-labor samples. Genes with the largest labor-associated change and the lowest variability in expression are likely to be fundamental for parturition, so gene expression was ranked accordingly. From the 500 genes with the highest rank we identified genes with similar expression profiles using two independent clustering techniques. Sets of genes with a probability of chance grouping by both techniques of less than 0.01 represented 71.2%, 81.8%, and 79.8% of the 500 genes in the fundus, lower segment, and cervix, respectively. We identified 14, 14, and 12 such sets of genes in the fundus, lower segment, and cervix, respectively. This enabled networks of co-regulated and co-expressed genes to be discovered. Many genes within the same cluster shared similar functions or had functions pertinent to the process of labor. CONCLUSIONS: Our results provide support for many of the established processes of parturition and also describe novel-to-labor genes not previously associated with this process. The elucidation of these mechanisms likely to be fundamental for controlling labor is an important prerequisite to the development of effective treatments for major obstetric problems
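
The ranking step described above favours genes with a large labor-associated change and low variability in expression. The abstract does not give the exact ranking rule, so the t-statistic-like score below is only one plausible sketch; `rank_score` and the toy expression values are illustrative:

```python
from statistics import mean, stdev

def rank_score(labor_vals, nonlabor_vals):
    """Score favouring genes with a large labor-associated change and
    low expression variability (a t-statistic-like ratio; the paper's
    exact ranking rule is not specified in the abstract)."""
    diff = abs(mean(labor_vals) - mean(nonlabor_vals))
    spread = stdev(labor_vals + nonlabor_vals)
    return diff / spread if spread else float("inf")

# Toy expression values for two hypothetical genes:
gene_a = rank_score([5.1, 5.0, 5.2], [1.0, 1.1, 0.9])  # big change, tight
gene_b = rank_score([3.0, 6.0, 1.0], [2.0, 5.0, 4.0])  # noisy, small change
print(gene_a > gene_b)  # True
```

The top-ranked genes would then be passed to the two independent clustering techniques to find co-expressed sets.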

  17. Involvement of vasoactive intestinal polypeptide in nicotine-induced relaxation of the rat gastric fundus

    OpenAIRE

    1997-01-01

    Nicotine-induced relaxation and release of vasoactive intestinal polypeptide (VIP)- and peptide histidine isoleucine (PHI)-like immunoreactivity (LI) were measured in longitudinal muscle strips from the rat gastric fundus. Under non-cholinergic conditions (0.3 μM atropine), nicotine (3–300 μM) produced concentration-dependent relaxations of the 5-hydroxytryptamine (3 μM)-precontracted strips. Under non-adrenergic non-cholinergic (NANC) conditions (0.3 μM atropine+1 μM phentolamine+1 μM nadolol...

  18. Coherent infrared imaging camera (CIRIC)

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  19. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    ... known as ‘the poetics of cinema.’ The dissertation embraces two branches of research within this perspective: stylistics and historical poetics (stylistic history). The dissertation takes on three questions in relation to camera movement and is accordingly divided into three major sections. The first section unearths what characterizes the literature on camera movement. The second section delineates the history of camera movement itself within narrative cinema; several organizational principles subtending the on-screen effect of camera movement are revealed there. To illustrate how the functions may mesh in individual camera movements, six concrete examples are analyzed. The analyses illustrate how the taxonomy presented can substantiate analysis and interpretation of film style. More generally, the dissertation, and particularly these in-depth analyses, illustrates how...

  20. Camera sensitivity study

    Science.gov (United States)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, including controlling costs, coping with uncontrolled illumination, developing and training a reliable classification system, and avoiding loss of performance caused by production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations. The first procedure is based on classification accuracy, while the second evaluates feature differences.

  1. Proportional counter radiation camera

    Science.gov (United States)

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon-emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced by ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  2. South African Medical Journal - Vol 78, No 9 (1990)

    African Journals Online (AJOL)

    Detecting asymptomatic coronary artery disease using routine exercise testing and exercise thallium ... Ophthalmoscopy versus non-mydriatic fundus photography in the detection of diabetic retinopathy in black ...

  3. A STUDY TO COMPARE FUNDUS FLUORESCEIN ANGIOGRAPHY AND OPTICAL COHERENCE TOMOGRAPHY IN AGE RELATED MACULAR DEGENERATION

    Directory of Open Access Journals (Sweden)

    Rani Sujatha

    2016-02-01

    Full Text Available PURPOSE To compare the diagnostic accuracy of optical coherence tomography (OCT) with fundus fluorescein angiography (FFA) in diagnosing age-related macular degeneration (ARMD). METHODS A total of 25 newly diagnosed ARMD patients were included in this prospective, randomized, hospital-based study, conducted between August 2013 and November 2015. RESULTS Most affected patients belonged to the 50-70 year age group, and 60% were female. The most common symptom was defective vision, accounting for 92%. Hypertension and hyperlipidemia were the most common risk factors. 12% of the cases had unilateral disease and 88% had bilateral disease. 6% of eyes were normal on both FFA and OCT. 62% of eyes by FFA and 61% by OCT had dry ARMD, and 32% by FFA and 33% by OCT had wet ARMD. CONCLUSION FFA is the gold-standard tool for screening ARMD, while OCT is more specific in detecting early subretinal neovascular membranes and in assessing their activity. Hence OCT is superior to FFA in diagnosing early wet ARMD and thus helps in the early management of patients with ARMD.

  4. Comparative inhibitory effects of niflumic acid and novel synthetic derivatives on the rat isolated stomach fundus.

    Science.gov (United States)

    Criddle, David N; Meireles, AnaVanescaP; Macêdo, Liana B; Leal-Cardoso, José H; Scarparo, Henrique C; Jaffar, Mohammed

    2002-02-01

    Novel derivatives of 2-[3-(trifluoromethyl)-anilino]nicotinic acid (niflumic acid) were synthesized. The compounds were compared for their inhibitory effects on 5-hydroxytryptamine (5-HT)- and KCl-induced contraction of the rat fundus. The aim was to assess structure-activity relationships regarding the selectivity and potency of these compounds. Niflumic acid (1-100 microM) concentration-dependently inhibited 5-HT-induced tonic contractions with an IC50 value (concentration reducing the control contractile response by 50%, calculated from semi-log graphs) of 0.24 x 10(-4) M (n = 9). In contrast, it was significantly less potent at inhibiting KCl-induced responses (IC50 = 1.49 x 10(-4) M, n = 9). The methyl ester (NFAme) and amido (NFAm) analogues showed no selectivity between 5-HT- and KCl-induced contractions, with IC50 values of 1.64 x 10(-4) M (n = 8) and 1.87 x 10(-4) M (n = 9) for 5-HT responses, and 2.61 x 10(-4) M (n = 8) and 2.55 x 10(-4) M (n = 7) for KCl-induced responses, respectively. Our results suggest that alteration of the carboxylic acid moiety of niflumic acid reduces the selectivity and potency of its inhibitory action on 5-HT-induced contractile responses of the rat fundus, possibly via a reduced interaction with calcium-activated chloride channels.
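
IC50 values like those above are read off semi-log concentration-response plots. A small sketch of that interpolation on a log10 concentration axis (the `ic50` helper and the dose-response numbers are hypothetical illustrations, not data from the study):

```python
import math

def ic50(concentrations, responses):
    """Interpolate the concentration at which the response falls to 50%
    of control, working on a log10 concentration axis (semi-log plot)."""
    pairs = sorted(zip(concentrations, responses))
    for (c1, r1), (c2, r2) in zip(pairs, pairs[1:]):
        if r1 >= 50 >= r2:   # responses fall as concentration rises
            frac = (r1 - 50) / (r1 - r2)
            logc = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** logc
    raise ValueError("response never crosses 50%")

# Hypothetical dose-response points (% of control contraction):
concs = [1e-6, 1e-5, 1e-4, 1e-3]
resp = [95, 80, 40, 10]
print(ic50(concs, resp))  # between 1e-5 and 1e-4 M
```

Working on the log axis matches how the IC50 is described as being calculated from semi-log graphs.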

  5. Bright Retinal Lesions Detection using Colour Fundus Images Containing Reflective Features

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Chaum, Edward [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK)

    2009-01-01

    In recent years the research community has developed many techniques to detect and diagnose diabetic retinopathy with retinal fundus images. This is a necessary step for the implementation of large-scale screening efforts in rural areas where ophthalmologists are not available. In the United States of America, the incidence of diabetes is worryingly increasing among the young population. Retinal fundus images of patients younger than 20 years old present a high amount of reflection due to the Nerve Fibre Layer (NFL); the younger the patient, the more visible these reflections are. To our knowledge, no published algorithms explicitly deal with this type of reflection artefact. This paper presents a technique to detect bright lesions even in patients with a highly reflective NFL. First, the candidate bright lesions are detected using image equalization and relatively simple histogram analysis. Then, a classifier is trained using a texture descriptor (multi-scale local binary patterns) and other features in order to remove the false positives in the lesion detection. Finally, the area of the lesions is used to diagnose diabetic retinopathy. Our database consists of 33 images from a telemedicine network currently under development. When determining moderate to high diabetic retinopathy from the detected bright lesions, the algorithm achieves a sensitivity of 100% at a specificity of 100% using leave-one-out testing.
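
The classifier above uses multi-scale local binary patterns as a texture descriptor. A minimal single-scale, radius-1 LBP sketch (the real pipeline is multi-scale and combines LBP with further features; `lbp_3x3` and `lbp_histogram` are illustrative names):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a grayscale image (no interpolation, radius 1)."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # neighbours in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalised LBP code histogram, usable as a texture feature vector."""
    h = np.bincount(lbp_3x3(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()

# A flat patch: every neighbour equals the centre, so every code is 255.
flat = np.full((5, 5), 7)
print(np.unique(lbp_3x3(flat)))  # [255]
```

A multi-scale variant repeats this at several radii and concatenates the histograms before training the false-positive-removal classifier.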

  6. Fundus Analysis and Visual Prognosis of Macular Hemorrhage in Pathological Myopia without Choroidal Neovasculopathy

    Institute of Scientific and Technical Information of China (English)

    Haitao Li; Feng Wen; De-zheng Wu; Guangwei Luo; Shizhou Huang; Tianqin Guan; Caijiao Liu

    2004-01-01

    Purpose: To analyze and evaluate the fundus characteristics and visual prognosis of macular hemorrhage in pathological myopia without choroidal neovasculopathy. Methods: Thirty-seven patients (38 eyes) with pathological myopia and macular hemorrhage but without choroidal neovascularization (CNV) underwent color photography and fundus fluorescein angiography (FFA) examinations. Indocyanine green angiography (ICGA) was also performed on 11 patients (11 eyes). Follow-up ranged from 3 to 21 months. Results: The macular hemorrhage in pathological myopia without CNV appeared oval, less than 1 PD in size, without edema or exudation. Lacquer cracks appeared at the site of previous subretinal bleeding in 84.2% of the eyes. Visual acuity improved in 81.6% of eyes during the follow-up period. ICGA revealed linear hypofluorescence in 7 of 11 eyes (63.6%), indicating a rupture of Bruch's membrane at the onset of subretinal bleeding. Conclusion: A rupture of the choriocapillaris complex and Bruch's membrane causes the macular hemorrhage of pathological myopia without CNV, leading to the formation of a new lacquer crack. Its prognosis is favorable. Eye Science 2004;20:57-62.

  7. Unusual optical coherence tomography and fundus autofluorescence findings of eclipse retinopathy

    Directory of Open Access Journals (Sweden)

    Kun-Hsien Li

    2012-01-01

    Full Text Available A 63-year-old female patient complained of dimness in the central field of vision in the left eye after viewing an annular partial eclipse without adequate eye protection on 22 July 2009. Fundoscopy showed a wrinkled macular surface. Fundus autofluorescence study revealed well-demarcated hyperautofluorescence at the fovea. Optical coherence tomography demonstrated tiny intraretinal cysts. Fluorescein angiography and indocyanine green angiography were unremarkable. Epimacular membrane developed in the following month with deteriorated vision. Vitrectomy, epiretinal membrane and internal limiting membrane peeling were performed. Vision was restored to 20/20 after the operation. Direct sun-gazing may damage the retinal structures resulting in macular inflammation and increased focal metabolism, which explains the hyperautofluorescence. It may also induce epimacular membrane. Fundus autofluorescence might represent a useful technique to detect subtle solar-induced injuries of the retina. The visual prognosis is favorable but prevention remains the mainstay of treatment. Public health education is mandatory in reducing visual morbidity.

  8. Ultrawide-field fundus photography of the first reported case of gyrate atrophy from Australia

    Directory of Open Access Journals (Sweden)

    Moloney TP

    2014-08-01

    Full Text Available Thomas P Moloney,1 Stephen O’Hagan,1 Lawrence Lee2,3 1Department of Ophthalmology, Cairns Hospital, Cairns, QLD, Australia; 2City Eye Centre, Brisbane, QLD, Australia; 3Associate Professor of Ophthalmology, School of Medicine, University of Queensland, Brisbane, QLD, Australia Abstract: Gyrate atrophy of the choroid and retina is a rare chorioretinal dystrophy inherited in an autosomal recessive pattern. We describe the first documented case of gyrate atrophy from Australia in a 56-year-old woman with a previous diagnosis of retinitis pigmentosa and worsening night vision in her right eye over several years. She was myopic and bilaterally pseudophakic, and fundus examination revealed pale optic discs and extensive peripheral chorioretinal atrophy exposing bare sclera bilaterally, with only small islands of normal-appearing retina at each posterior pole. Visual field testing showed grossly constricted fields, blood testing showed hyperornithinemia, and further questioning revealed consanguinity between the patient’s parents. We then used the patient’s typical retinal findings of gyrate atrophy to demonstrate the potential use of ultrawide-field fundus photography and angiography in diagnosis and in monitoring response to future treatment. Keywords: gyrate atrophy, ultrawide-field retinal photography, angiography, retinal photography, hyperornithinemia

  9. Quantitative Analysis of Fundus-Image Sequences Reveals Phase of Spontaneous Venous Pulsations

    Science.gov (United States)

    Moret, Fabrice; Reiff, Charlotte M.; Lagrèze, Wolf A.; Bach, Michael

    2015-01-01

    Purpose Spontaneous venous pulsation correlates negatively with elevated intracranial pressure and papilledema, and it relates to glaucoma. Yet, its etiology remains unclear. A key element to elucidate its underlying mechanism is the time at which collapse occurs with respect to the heart cycle, but previous reports are contradictory. We assessed this question in healthy subjects using quantitative measurements of both vein diameters and artery lateral displacements; the latter being used as the marker of the ocular systole time. Methods We recorded 5-second fundus sequences with a near-infrared scanning laser ophthalmoscope in 12 young healthy subjects. The image sequences were coregistered, cleaned from microsaccades, and filtered via a principal component analysis to remove nonpulsatile dynamic features. Time courses of arterial lateral displacement and of diameter at sites of spontaneous venous pulsation or proximal to the disk were retrieved from those image sequences and compared. Results Four subjects displayed both arterial and venous pulsatile waveforms. On those, we observed venous diameter waveforms differing markedly among the subjects, ranging from a waveform matching the typical intraocular pressure waveform to a close replica of the arterial waveform. Conclusions The heterogeneity in waveforms and arteriovenous phases suggests that the mechanism governing the venous outflow resistance differs among healthy subjects. Translational relevance Further characterizations are necessary to understand the heterogeneous mechanisms governing the venous outflow resistance as this resistance is altered in glaucoma and is instrumental when monitoring intracranial hypertension based on fundus observations. PMID:26396929
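
    The preprocessing chain described above (coregistered frames, then a principal component analysis that keeps only the dominant pulsatile dynamics) can be illustrated on synthetic data. The sequence below is simulated, and retaining a single principal component is a simplifying assumption:

```python
import numpy as np

# Simulated 5-s "fundus sequence": each frame is a flattened image whose
# intensity is modulated by a ~1 Hz pulse over a fixed spatial pattern.
rng = np.random.default_rng(1)
n_frames, n_pix = 100, 400
t = np.linspace(0, 5, n_frames)
pulse = np.sin(2 * np.pi * 1.0 * t)          # heart-cycle waveform
spatial = rng.normal(size=n_pix)              # where the pulsation appears
frames = np.outer(pulse, spatial) + 0.1 * rng.normal(size=(n_frames, n_pix))

# PCA via SVD on the mean-centered sequence; the leading component
# captures the dominant (pulsatile) dynamic feature.
X = frames - frames.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
recon = np.outer(U[:, 0] * S[0], Vt[0])       # pulsatile part only

# The first PC's time course tracks the simulated pulse (up to sign).
corr = abs(np.corrcoef(U[:, 0], pulse)[0, 1])
print(round(corr, 2))
```

    Time courses of vessel diameter or arterial displacement would then be read out from the filtered reconstruction rather than from the raw frames.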

  10. Vision Sensors and Cameras

    Science.gov (United States)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities, equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.
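
    The "constant contrast sensitivity" of a logarithmic pixel mentioned above can be checked with a two-line model; the offset and scale constants below are arbitrary illustrative values:

```python
import math

def log_pixel(luminance, v0=0.5, scale=0.05):
    """Idealized logarithmic pixel: output rises by a fixed step per decade."""
    return v0 + scale * math.log10(luminance)

# A fixed 10% contrast produces the same output step at any light level,
# which is what "constant contrast sensitivity" means in practice.
step_dim = log_pixel(1.1) - log_pixel(1.0)
step_bright = log_pixel(110000.0) - log_pixel(100000.0)
print(abs(step_dim - step_bright) < 1e-9)  # True: equal steps for equal contrast
```

    A linear charge-integration pixel, by contrast, would produce a step 100,000 times larger at the bright end, which is exactly the dynamic-range problem described above.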

  11. Status of the FACT camera

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Quirin [ETH Zurich, Institute for Particle Physics, 8093 Zurich (Switzerland); Collaboration: FACT-Collaboration

    2011-07-01

    The First G-APD Cherenkov Telescope (FACT) project develops a novel camera type for very high energy gamma-ray astronomy. A total of 1440 Geiger-mode avalanche photodiodes (G-APD) are used for light detection, each accompanied by a solid light concentrator. All electronics for analog signal processing, digitization and triggering are fully integrated into the camera body. The event data are sent via Ethernet to the counting house. In order to compensate for gain variations of the G-APDs an online feedback system analyzing calibration light pulses is employed. Once the construction and commissioning of the camera is finished it will be transported to La Palma, Canary Islands, and mounted on the refurbished HEGRA CT3 telescope structure. In this talk the architecture and status of the FACT camera is presented.

  12. An Inexpensive Digital Infrared Camera

    Science.gov (United States)

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  13. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  14. Prognostic impact of metastatic lymph node ratio in advanced gastric cancer from cardia and fundus

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    AIM: To investigate the prognostic impact of the metastatic lymph node ratio (MLR) in advanced gastric cancer from the cardia and fundus. METHODS: Two hundred and thirty-six patients with gastric cancer from the cardia and fundus who underwent D2 curative resection were analyzed retrospectively. The correlations between the MLR, the number of positive nodes, and the total number of lymph nodes resected were analyzed. The influence of MLR on the survival time of patients was determined with univariate Kaplan-Meier survival analysis and multivariate Cox proportional hazard model analysis, and multiple linear regression was used to identify the relation between MLR and the 5-year survival rate. RESULTS: The MLR did not correlate with the total number of lymph nodes resected (r = -0.093, P = 0.057). The 5-year overall survival rate of the whole cohort was 37.5%. Kaplan-Meier survival analysis identified the following eight factors as influencing postoperative survival time: gender (χ² = 4.26, P = 0.0389), tumor size (χ² = 18.48, P < 0.001), Borrmann type (χ² = 7.41, P = 0.0065), histological grade (χ² = 5.07, P = 0.0243), pT category (χ² = 49.42, P < 0.001), pN category (χ² = 87.7, P < 0.001), total number of retrieved lymph nodes (χ² = 8.22, P = 0.0042) and MLR (χ² = 34.3, P < 0.001). The Cox proportional hazard model showed that tumor size (χ² = 7.985, P = 0.018), pT category (χ² = 30.82, P < 0.001) and MLR (χ² = 69.39, P < 0.001) independently influenced the prognosis. A linear correlation between MLR and 5-year survival was statistically significant in the multiple linear regression (β = -0.63, P < 0.001); by this fit, the 5-year survival would surpass 50% when the MLR was lower than 10%. CONCLUSION: The MLR is an independent prognostic factor for patients with advanced gastric cancer from the cardia and fundus. Decreasing the MLR by resecting an adequate number of lymph nodes can improve survival.
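
    The MLR itself and the reported linear fit are simple arithmetic. In the sketch below, the slope is the β = -0.63 reported above, but the intercept is a hypothetical value chosen only so that survival crosses 50% near MLR = 10%, as the abstract suggests; the node counts are made up:

```python
def mlr(positive_nodes, total_nodes):
    """Metastatic lymph node ratio: positive nodes / total resected nodes."""
    return positive_nodes / total_nodes

def predicted_survival(mlr_percent, intercept=56.3, beta=-0.63):
    """Illustrative linear fit: each 10-point rise in MLR (%) lowers
    the predicted 5-year survival by about 6.3 points."""
    return intercept + beta * mlr_percent

ratio = mlr(6, 30)                         # 6 positive of 30 resected nodes
print(round(ratio, 2))                     # 0.2
print(round(predicted_survival(10.0), 1))  # 50.0, i.e. the 10% threshold
```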

  15. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

    Full Text Available Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and were independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated at the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors
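
    The size of a radial symmetric distortion can be put in context with the usual polynomial (Brown) distortion model; the coefficient below is a made-up value chosen only to produce a corner shift of a few micrometres, comparable to the residual reported for the inclined cameras:

```python
def radial_shift(x, y, k1, k2=0.0):
    """Brown-model radial distortion: the shift grows with the distance
    from the principal point (image coordinates in mm)."""
    r2 = x * x + y * y
    factor = k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Hypothetical coefficient k1 for a point near an image corner (15, 10) mm.
dx, dy = radial_shift(15.0, 10.0, k1=1e-6)
shift_um = 1000.0 * (dx * dx + dy * dy) ** 0.5
print(round(shift_um, 1))  # a shift of several micrometres at the corner
```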

  16. Primary gastric fundus tuberculosis in immunocompetent patient: a case report and literature review

    Directory of Open Access Journals (Sweden)

    Fahmi Yousef Khan

    2008-10-01

    Full Text Available We report on a 29-year-old Pakistani man who presented to the clinic with epigastric pain of one month's duration. He did not report fever, cough, vomiting blood, passing black stools, loss of appetite or diarrhea. However, he had lost 7 kg since his symptoms had begun. Clinical examination was unremarkable. Laboratory results were within normal limits. An abdominal CT scan showed an enhancing mass in the stomach. Gastric endoscopy revealed an ulcerative mass in the fundus. An endoscopic biopsy specimen revealed caseating granulomas with acid-fast bacilli. The patient was diagnosed with primary gastric tuberculosis, and antituberculous medications were initiated. Cultures of the gastric mass subsequently grew Mycobacterium tuberculosis sensitive to isoniazid and rifampicin. Follow-up after six months showed a good response to treatment; an upper gastrointestinal tract endoscopy after six months was normal.

  17. Classification of Diabetic Macular Edema and Its Stages Using Color Fundus Image

    Institute of Scientific and Technical Information of China (English)

    Muhammad Zubair; Shoab A. Khan; Ubaid Ullah Yasin

    2014-01-01

    Diabetic macular edema (DME) is a retinal thickening involving the center of the macula. It is one of the serious eye diseases which affect central vision and can lead to partial or even complete visual loss. The only cure is timely diagnosis, prevention, and treatment of the disease. This paper presents an automated system for the diagnosis and classification of DME using color fundus images. In the proposed technique, the optic disc is first removed by applying some preprocessing steps. The preprocessed image is then passed through a classifier which segments the image to detect exudates. The classifier uses a dynamic thresholding technique based on some input parameters of the image. Stage classification is done on the basis of the Early Treatment Diabetic Retinopathy Study (ETDRS) criteria to assess the severity of disease. The proposed technique gives a sensitivity, specificity, and accuracy of 98.27%, 96.58%, and 96.54%, respectively, on a publicly available database.
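
    A dynamic threshold of the kind described, recomputed from each image's own statistics rather than fixed globally, might look like the following sketch; the mean-plus-k-standard-deviations rule is an assumption, not the paper's exact parameterization:

```python
import numpy as np

def exudate_mask(green, k=3.0):
    """Dynamic threshold: mean + k*std of the channel, computed per image,
    so the cut-off adapts to each fundus photograph's overall brightness."""
    g = green.astype(float)
    return g > g.mean() + k * g.std()

# Synthetic green channel: dim background plus one bright "hard exudate".
rng = np.random.default_rng(2)
img = rng.normal(100, 10, size=(64, 64))
img[10:13, 10:13] = 220.0
mask = exudate_mask(img)
print(mask[11, 11])  # the simulated exudate is flagged
```

    Because the threshold follows each image's statistics, the same rule works across photographs with different exposures, which a single fixed threshold would not.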

  18. Automatic differentiation of color fundus images containing drusen or exudates using a contextual spatial pyramid approach.

    Science.gov (United States)

    van Grinsven, Mark J J P; Theelen, Thomas; Witkamp, Leonard; van der Heijden, Job; van de Ven, Johannes P H; Hoyng, Carel B; van Ginneken, Bram; Sánchez, Clara I

    2016-03-01

    We developed an automatic system to identify and differentiate color fundus images containing no lesions, drusen or exudates. Drusen and exudates are lesions with a bright appearance, associated with age-related macular degeneration and diabetic retinopathy, respectively. The system consists of three lesion detectors operating at pixel-level, combining their outputs using spatial pooling and classification with a random forest classifier. System performance was compared with ratings of two independent human observers using human-expert annotations as reference. Kappa agreements of 0.89, 0.97 and 0.92 and accuracies of 0.93, 0.98 and 0.95 were obtained for the system and observers, respectively.
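
    The spatial pooling step, which turns pixel-level detector outputs into one fixed-length descriptor per image before the random forest, can be sketched as a small spatial pyramid; the grid sizes and the choice of max-pooling are illustrative assumptions:

```python
import numpy as np

def spatial_pyramid_pool(prob_map, levels=(1, 2)):
    """Pool a pixel-level lesion probability map over a spatial pyramid:
    at level n the map is split into an n x n grid and each cell is
    summarized by its maximum, yielding a fixed-length image descriptor."""
    h, w = prob_map.shape
    feats = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                cell = prob_map[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                feats.append(cell.max())
    return np.array(feats)

# A map with one confident detection in the top-left quadrant.
pm = np.zeros((8, 8))
pm[1, 1] = 0.9
f = spatial_pyramid_pool(pm)
print(f.tolist())  # [0.9, 0.9, 0.0, 0.0, 0.0]
```

    The descriptor encodes both whether a lesion was detected (level 1) and roughly where (level 2), which is what lets a subsequent classifier separate drusen-like from exudate-like spatial patterns.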

  19. Region-based multi-step optic disk and cup segmentation from color fundus image

    Science.gov (United States)

    Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan

    2013-02-01

    The retinal optic cup-to-disk ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation, based on blood-vessel-inpainted HSL lightness images and green-channel images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with manual outlining results from two experts. Dice scores of the detected disk and cup regions between the automatic and manual results were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment has demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
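
    Both evaluation quantities mentioned above, the vertical CDR and the Dice score, follow directly from binary segmentation masks; the masks below are toy rectangles, not real segmentations:

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio: ratio of the vertical extents of the
    segmented cup and disc regions."""
    def vextent(m):
        rows = np.flatnonzero(m.any(axis=1))
        return rows[-1] - rows[0] + 1
    return vextent(cup_mask) / vextent(disc_mask)

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

disc = np.zeros((20, 20), bool); disc[2:18, 4:16] = True  # 16 rows tall
cup = np.zeros((20, 20), bool); cup[6:14, 7:13] = True    # 8 rows tall
print(vertical_cdr(disc, cup))      # 0.5
print(round(dice(disc, disc), 1))   # 1.0 for a perfect match
```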

  20. A study of prevalence and association of fundus changes in pregnancy induced hypertension

    Directory of Open Access Journals (Sweden)

    Varija T.

    2016-05-01

    Results: Out of the total 423 patients with PIH examined, retinal changes (hypertensive retinopathy changes) were noted in 181 (42.7%) patients. The prevalence of retinopathy changes was higher among patients with imminent eclampsia (76.5%) and eclampsia (50%). As the severity of the PIH increased, the odds of women developing retinopathy also increased substantially, from OR 17.6 (95% CI: 3.1-136.3) in severe PIH to OR 253 (95% CI: 47.2-1935) in imminent eclampsia, and this association between the severity of PIH and the development of retinopathy changes was found to be statistically significant. Conclusions: Fundus examination in cases of PIH is of paramount importance in monitoring and managing cases, as it correlates with the severity of PIH. [Int J Reprod Contracept Obstet Gynecol 2016;5(5):1375-1379]
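
    Odds ratios with confidence intervals of the kind quoted above are computed from a 2×2 exposure-outcome table. The counts below are hypothetical, chosen only to illustrate the calculation, and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table (a, b = exposed with/without outcome;
    c, d = unexposed with/without outcome) with a Wald 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 13/17 severe cases with retinopathy vs 5/50 controls.
or_, lo, hi = odds_ratio_ci(13, 4, 5, 45)
print(round(or_, 1), lo < or_ < hi)  # 29.2 True
```

    The very wide intervals in the abstract (e.g. 47.2-1935) are typical of this formula when one cell of the table is small.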

  1. DISCOVERING ABNORMAL PATCHES AND TRANSFORMATIONS OF DIABETIC RETINOPATHY IN BIG FUNDUS COLLECTIONS

    Directory of Open Access Journals (Sweden)

    Yuqian ZHOU

    2017-01-01

    Full Text Available Diabetic retinopathy (DR) is one of the retinal diseases caused by the long-term effects of diabetes. Early detection of diabetic retinopathy is crucial since timely treatment can prevent progressive loss of vision. The most common diagnostic technique for diabetic retinopathy is for clinicians to screen for abnormalities in retinal fundus images. However, the limited number of well-trained clinicians increases the possibility of misdiagnosis. In this work, we propose a big-data-driven automatic computer-aided diagnosis (CAD) system for diabetic retinopathy severity regression based on transfer learning, which starts from a deep convolutional neural network pre-trained on generic images and adapts it to large-scale DR datasets. From images in the training set, we also automatically segment the abnormal patches with an occlusion test, and model the transformations and deterioration process of DR. Our results can be widely used for fast diagnosis of DR, medical education and public-level healthcare propagation.
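
    The transfer-learning recipe, freezing a pre-trained feature extractor and fitting only a new task-specific head, can be miniaturized with NumPy. The fixed random projection below merely stands in for a pre-trained CNN backbone, and the toy data are not fundus images:

```python
import numpy as np

rng = np.random.default_rng(3)
W_pretrained = rng.normal(size=(64, 8))  # frozen "backbone" weights

def features(x):
    # Frozen feature extractor: never updated during head training.
    return np.tanh(x @ W_pretrained / 8.0)

# Toy two-class dataset: class-1 samples have a shifted mean intensity.
X = rng.normal(size=(200, 64))
y = (rng.random(200) < 0.5).astype(float)
X[y == 1] += 2.0

# Train only the small logistic-regression head by gradient descent.
F = features(X)
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= 0.5 * F.T @ g / len(y)
    b -= 0.5 * g.mean()
acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == (y == 1)).mean()
print(round(acc, 2))
```

    In the actual system, the backbone would be a deep network pre-trained on generic images, and some or all of its layers would be fine-tuned on the large-scale DR data rather than kept fully frozen.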

  2. The Effects of Histamine H3 Receptors on Contractile Responses on Rat Gastric Fundus

    Directory of Open Access Journals (Sweden)

    Aşkın Hekimoğlu

    2006-01-01

    Full Text Available The aim of this study is to determine the effects of histamine receptors on gastrointestinal smooth muscle contractions and the role of histamine H3 receptors in these effects. Isolated rat gastric fundus preparations were suspended in an isolated organ bath, histamine receptor agonists and antagonists were added to the bath solution, and the contractile responses induced by electrical field stimulation were evaluated. In our study groups, contractile responses were observed after blocking one of the histamine receptors H1, H2 or H3; then the other two receptors were blocked individually or in combination to observe the changes in the contractile responses to electrical stimulation. To block the histamine receptors, pyrilamine (10⁻⁶ M) as H1 receptor blocker, famotidine (10⁻⁶ M) as H2 receptor blocker and thioperamide (10⁻⁵ M) as H3 receptor blocker, and various combinations of them, were used. All groups were treated with the H3 receptor antagonist thioperamide (10⁻⁵ M) and the agonist (R)-α-methylhistamine (RMHA) at 10⁻⁸, 10⁻⁷, 10⁻⁶ and 10⁻⁵ molar concentrations cumulatively to observe its mediating effects on the contractile responses. We suggest that (R)-α-methylhistamine mediates inhibition of the contractile responses of the rat gastric fundus. This conclusion is supported by the following findings: (a) the selective agonist RMHA caused a damping of the contractile effect of acetylcholine; (b) the effect of RMHA was prevented by the selective H3 receptor antagonist thioperamide.

  3. Evaluation of peripheral fundus autofluorescence in eyes with wet age-related macular degeneration

    Directory of Open Access Journals (Sweden)

    Suetsugu T

    2016-12-01

    Full Text Available Tetsuyuki Suetsugu,1,2 Aki Kato,1 Munenori Yoshida,1 Tsutomu Yasukawa,1 Akiko Nishiwaki,1,3 Norio Hasegawa,1 Hideaki Usui,1 Yuichiro Ogura1 1Department of Ophthalmology and Visual Science, Nagoya City University Graduate School of Medical Sciences, 2Department of Ophthalmology, General Kamiiida Daiichi Hospital, 3Nishiwaki Eye Clinic, Nagoya, Aichi, Japan Purpose: We aimed to evaluate the prevalence of abnormal peripheral fundus autofluorescence (FAF) in wet age-related macular degeneration (AMD) using a wide-field imaging instrument. Patients and methods: A retrospective, case-controlled study involving 66 eyes of 46 Japanese wet AMD patients and 32 eyes of 20 control patients was performed. Wide-field FAF images were obtained for typical AMD (37 eyes/28 patients), polypoidal choroidal vasculopathy (PCV; 22 eyes/20 patients), and retinal angiomatous proliferation (RAP; seven eyes/four patients). Two masked ophthalmologists independently graded the images for mottled, granular, and nummular patterns. Main outcome measures were abnormal peripheral FAF frequencies and relative risks by disease subgroup and treatment. Results: Abnormal peripheral FAF patterns were found in 51.5% of wet AMD eyes compared with 18.8% of control eyes (P<0.001). Mottled, granular, and nummular patterns were found in 45.5%, 31.8%, and 16.7%, respectively, of wet AMD eyes. Each disease subgroup (typical AMD, 54.1%; PCV, 36.4%; and RAP, 85.7%) showed significantly higher frequencies of peripheral FAF (P<0.001, P=0.03, and P<0.001, respectively) than control eyes (18.8%). There were no significant differences (P=0.76) between the frequencies in untreated and treated eyes. Conclusion: Eyes of Japanese wet AMD patients had a higher prevalence of abnormal FAF compared with control eyes. Among the three disease subtypes, abnormal patterns were least prevalent in PCV eyes. Keywords: age-related macular degeneration, fundus autofluorescence, polypoidal choroidal vasculopathy, retinal

  4. Automated detection of optic disk in retinal fundus images using intuitionistic fuzzy histon segmentation.

    Science.gov (United States)

    Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chua, Chua Kuang; Min, Lim Choo; Ng, E Y K; Mushrif, Milind M; Laude, Augustinus

    2013-01-01

    The human eye is one of the most sophisticated organs, with perfectly interrelated retina, pupil, iris, cornea, lens, and optic nerve. Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Uncontrolled diabetic retinopathy (DR) and glaucoma may lead to blindness. The identification of retinal anatomical regions is a prerequisite for the computer-aided diagnosis of several retinal diseases. The manual examination of the optic disk (OD) is a standard procedure used for detecting different stages of DR and glaucoma. In this article, a novel automated, reliable, and efficient OD localization and segmentation method using digital fundus images is proposed. General-purpose edge detection algorithms often fail to segment the OD due to fuzzy boundaries, inconsistent image contrast, or missing edge features. This article proposes a novel and probably the first method using Atanassov intuitionistic fuzzy histon (A-IFSH)-based segmentation to detect the OD in retinal fundus images. OD pixel intensity and column-wise neighborhood operations are employed to locate and isolate the OD. The method has been evaluated on 100 images comprising 30 normal, 39 glaucomatous, and 31 DR images. Our proposed method has yielded a precision of 0.93, recall of 0.91, F-score of 0.92, and mean segmentation accuracy of 93.4%. We have also compared the performance of our proposed method with the Otsu and gradient vector flow (GVF) snake methods. Overall, our results show the superiority of the proposed fuzzy segmentation technique over the other two segmentation methods.
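
    The reported F-score follows directly from the reported precision and recall as their harmonic mean:

```python
def f_score(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision 0.93 and recall 0.91, as reported above, round to F = 0.92.
print(round(f_score(0.93, 0.91), 2))  # 0.92
```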

  5. The importance of fundus eye testing in rubella-induced deafness.

    Science.gov (United States)

    Tamayo, Marta L; García, Natalia; Bermúdez Rey, María Carolina; Morales, Lisbeth; Flórez, Silvia; Varón, Clara; Gelvez, Nancy

    2013-09-01

    The purpose of this study was to establish a new approach to improve the detection of deafness due to rubella. Colombian institutes for the deaf were visited by a medical team, and all enrolled individuals underwent an ophthalmological examination by a retina specialist, with emphasis on the fundus. In cases where ocular alterations compatible with congenital rubella syndrome (CRS) were found, a medical interview by a clinical geneticist analyzing pre- and postnatal history and a thorough medical examination were done. A total of 1383 institutionalized deaf individuals were evaluated in 9 Colombian cities in the period 2005 to 2006, finding a total of 463 cases positive for salt-and-pepper retinopathy (33.5%), in which rubella could be the etiology of the deafness. Medellin, Cartagena, Bucaramanga and Barranquilla were the cities with the highest percentage of congenital rubella, corresponding to 22.8% of the analyzed population. The analysis performed on cases in which a reliable prenatal history was obtained in a second appointment (n=88) showed an association between positive viral symptoms during pregnancy and salt-and-pepper retinopathy in 62.5% of cases, while both (retinopathy and viral symptoms) were absent in 29.5% of cases, showing a correlation in 92% of cases. The frequency of deafness due to rubella obtained in this study is significantly high compared with previous Colombian studies and with international reports. It was possible to correlate the antecedent of symptoms during pregnancy with the presence of salt-and-pepper retinopathy in this deaf population when a reliable prenatal history was available; therefore, eye testing with emphasis on fundus examination is a good indicator of rubella-induced deafness. We propose a new approach in the search for causes of deafness, based on a thorough ophthalmologic examination of all deaf people. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Two-port laparoscopic cholecystectomy with modified suture retraction of the fundus: A practical approach

    Directory of Open Access Journals (Sweden)

    Ming G Tian

    2013-01-01

    Full Text Available Context: Although transumbilical single-incision laparoscopic cholecystectomy (SILC) has been demonstrated to give superior cosmetic results, at present it is limited to simple cases. In complex cases, the standard four- or three-port LC is still the treatment of choice. Aim: To summarize the clinical effect of a modified technique in two-port LC. Settings and Design: A consecutive series of patients with benign gallbladder diseases admitted to the provincial teaching hospital who underwent LC in the past 4 years were included. A modified two-port LC was the first choice except for those requiring laparoscopic common bile duct exploration (LCBDE). Materials and Methods: The operation was done with suture retraction of the fundus by a needle-like retractor. The patients' data, including the operative time, time consumed by gallbladder retraction, operative bleeding, conversion rate, rate of adding trocars, and postoperative complications, were recorded. Statistical Analysis: Data were expressed as percentage and mean with standard deviation. Results: A total of 107 patients with chronic calculous cholecystitis (N = 61), acute calculous cholecystitis (N = 43), and cholecystic polyps (N = 3) received two-port LC. The procedure was successful in 99 out of 107 cases (success rate, 92.5%), and a third trocar was added in the remaining 8 cases (7.5%) due to severe pathological changes. The operative time was 47.2 (±13.21) min. There was no conversion to open surgery. Conclusion: Two-port LC using a needle-like retractor for suture retraction of the gallbladder fundus is a practical approach when considering the safety, convenience, and indications as well as the relatively minimal invasion.

  7. Application of 3-dimensional printing technology to construct an eye model for fundus viewing study.

    Directory of Open Access Journals (Sweden)

    Ping Xie

    Full Text Available To construct a life-sized eye model using three-dimensional (3D) printing technology for fundus viewing studies of the viewing system. We devised our schematic model eye based on Navarro's eye and redesigned some parameters because of the change of the corneal material and the implantation of intraocular lenses (IOLs). The optical performance of our schematic model eye was compared with Navarro's schematic eye and two other reported physical model eyes using the ZEMAX optical design software. With computer-aided design (CAD) software, we designed the 3D digital model of the main structure of the physical model eye, which was used for 3D printing. Together with the main printed structure, a polymethyl methacrylate (PMMA) aspherical cornea, a variable iris, and IOLs were assembled into a physical eye model. Angle scale bars were glued from the posterior pole to the periphery of the retina. We then fabricated three further physical models with different states of ametropia. Optical parameters of these physical eye models were measured to verify the 3D printing accuracy. In on-axis calculations, our schematic model eye possessed a spot diagram similar in size to those of Navarro's and Bakaraju's model eyes, and much smaller than that of Arianpour's model eye. Moreover, the spherical aberration of our schematic eye was much less than that of the other three model eyes, while in off-axis simulation it possessed slightly higher coma and similar astigmatism, field curvature and distortion. The MTF curves showed that all the model eyes diminished in resolution with increasing field of view, and the resolution of our physical eye model diminished similarly to that of the Navarro eye. The measured parameters of our eye models with different states of ametropia were in line with the theoretical values. The schematic eye model we designed can well simulate the optical performance of the human eye, and the fabricated physical one can be used as a tool in fundus viewing research.

  8. Spectral-domain optical coherence tomographic and fundus autofluorescence findings in eyes with primary intraocular lymphoma

    Directory of Open Access Journals (Sweden)

    Egawa M

    2014-01-01

    Full Text Available Mariko Egawa, Yoshinori Mitamura, Yuki Hayashi, Takeshi Naito (Department of Ophthalmology, Institute of Health Biosciences, The University of Tokushima Graduate School, Tokushima, Japan). Background: The purpose of this study was to evaluate the findings on spectral-domain optical coherence tomography (SD-OCT) and fundus autofluorescence (FAF) in three eyes with primary intraocular lymphoma (PIOL). Methods: The medical records of three eyes from three patients with biopsy-proven PIOL and retinal infiltrations were reviewed. The SD-OCT and fluorescein angiographic findings were evaluated in the three eyes, and FAF images in two eyes. Results: The PIOL in the three patients was monocular. Vitreous opacities and retinal infiltrations were observed in the three eyes, and iritis was present in two eyes. The cytologic diagnosis was class V in two eyes and class III in one eye. The interleukin-10/interleukin-6 ratio was >1.0 in the vitreous and aqueous humor of the three eyes. The FAF images for two eyes showed abnormal granular hyperautofluorescence and hypoautofluorescence, which were the reverse of the pattern in the fluorescein angiographic images. In all three eyes, SD-OCT showed hyper-reflective infiltrations at the level of the retinal pigment epithelium (RPE), a separation of the Bruch membrane from the RPE, damage to the RPE, disruption of the photoreceptor inner segment/outer segment junction, and multiple hyper-reflective signals in the inner retina. Conclusion: Because of the characteristic FAF and SD-OCT findings in these eyes with PIOL, we suggest that these noninvasive methods may be used for a rapid diagnosis of PIOL and also for understanding the pathology of PIOL. Keywords: spectral-domain optical coherence tomography, fundus autofluorescence, primary intraocular lymphoma

  9. GCaMP expression in retinal ganglion cells characterized using a low-cost fundus imaging system

    Science.gov (United States)

    Chang, Yao-Chuan; Walston, Steven T.; Chow, Robert H.; Weiland, James D.

    2017-10-01

    Objective. Virus-transduced intracellular calcium indicators are effective reporters of neural activity, offering the advantage of cell-specific labeling. Because there is an optimal time window for the expression of calcium indicators, a suitable tool for tracking genetically encoded calcium indicator (GECI) expression in vivo following transduction is highly desirable. Approach. We developed a noninvasive imaging approach based on a custom-modified, low-cost fundus viewing system that allowed us to monitor and characterize in vivo bright-field and fluorescence images of the mouse retina. AAV2-CAG-GCaMP6f was injected into a mouse eye. The fundus imaging system was used to measure fluorescence at several time points post injection. At defined time points, we prepared wholemount retina mounted on a transparent multielectrode array and used calcium imaging to evaluate the responsiveness of retinal ganglion cells (RGCs) to external electrical stimulation. Main results. The noninvasive fundus imaging system clearly resolves individual RGCs and axons. RGC fluorescence intensity and the number of observable fluorescent cells show a similar rising trend from week 1 to week 3 after viral injection, indicating a consistent increase of GCaMP6f expression. Analysis of the in vivo fluorescence intensity trend and in vitro neurophysiological responsiveness shows that the slope of intensity versus days post injection can be used to estimate the optimal time for calcium imaging of RGCs in response to external electrical stimulation. Significance. The proposed fundus imaging system enables high-resolution digital fundus imaging in the mouse eye based on off-the-shelf components. The long-term tracking experiment with in vitro calcium imaging validation demonstrates that the system can serve as a powerful tool for monitoring the level of genetically encoded calcium indicator expression and for determining the optimal time window for subsequent experiments.
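The optimal-window estimate above hinges on the slope of fluorescence intensity versus days post injection. As a minimal sketch of that step (function and variable names are ours, not from the paper, and the readings below are hypothetical), an ordinary least-squares slope can be computed directly:

```python
def fit_slope(days, intensity):
    """Ordinary least-squares slope of mean fluorescence vs. days post injection."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(intensity) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, intensity))
    den = sum((x - mean_x) ** 2 for x in days)
    return num / den

# Hypothetical weekly readings of mean RGC fluorescence (arbitrary units):
rate = fit_slope([7, 14, 21], [40.0, 55.0, 70.0])  # fluorescence rise per day
```

In the paper's scheme, a slope like `rate` would then be compared against the in vitro responsiveness data to pick the imaging window.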

  10. Traditional gamma cameras are preferred.

    Science.gov (United States)

    DePuey, E Gordon

    2016-08-01

    Although the new solid-state dedicated cardiac cameras provide excellent spatial and energy resolution and allow for markedly reduced SPECT acquisition times and/or injected radiopharmaceutical activity, they have some distinct disadvantages compared to traditional sodium iodide SPECT cameras. They are expensive. Attenuation correction is not available. Cardio-focused collimation, advantageous to increase depth-dependent resolution and myocardial count density, accentuates diaphragmatic attenuation and scatter from subdiaphragmatic structures. Although supplemental prone imaging is therefore routinely advised, many patients cannot tolerate it. Moreover, very large patients cannot be accommodated in the solid-state camera gantries. Since data are acquired simultaneously with an arc of solid-state detectors around the chest, no temporally dependent "rotating" projection images are obtained. Therefore, patient motion can be neither detected nor corrected. In contrast, traditional sodium iodide SPECT cameras provide rotating projection images to allow technologists and physicians to detect and correct patient motion and to accurately detect the position of soft tissue attenuators and to anticipate associated artifacts. Very large patients are easily accommodated. Low-dose x-ray attenuation correction is widely available. Also, relatively inexpensive low-count density software is provided by many vendors, allowing shorter SPECT acquisition times and reduced injected activity approaching that achievable with solid-state cameras.

  11. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  12. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
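As context for the baseline the authors improve on, the conventional least-squares 3 × 3 characterization can be sketched in pure Python via the normal equations (a generic illustration, not the paper's code; the perceptual variant would replace this objective with ΔE, S-CIELAB, or CID minimization over spherically sampled matrices):

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_color_matrix(rgb, xyz):
    """Least-squares 3x3 matrix mapping camera RGB samples to XYZ samples."""
    rows = []
    for k in range(3):  # solve the normal equations for one output channel at a time
        a = [[sum(p[i] * p[j] for p in rgb) for j in range(3)] for i in range(3)]
        b = [sum(p[i] * q[k] for p, q in zip(rgb, xyz)) for i in range(3)]
        rows.append(solve(a, b))
    return rows  # rows[k] maps an RGB triplet to output channel k
```

Applying the fitted matrix to a pixel is then `xyz_est_k = sum(m[k][i] * rgb[i] for i in range(3))` for each channel k.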

  13. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is operating successfully at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  14. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper describes EDICAM's firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event-driven imaging, which is capable of focusing image readout on Regions of Interest (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers, but in the future more sophisticated methods might also be defined. The camera provides a 444 Hz frame rate at the full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind, the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices, for example at ASDEX Upgrade and COMPASS, with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper presents the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  15. Multi-spectral camera development

    CSIR Research Space (South Africa)

    Holloway, M

    2012-10-01

    Full Text Available Multi-Spectral Camera Development, presented by Mark Holloway at the 4th Biennial Conference, 10 October 2012 (CSIR 2012). The slides cover applications of the multi-spectral camera, including fused imagery from the Red, Green, Blue and Near-Infrared (IR) channels.

  16. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  17. OSIRIS camera barrel optomechanical design

    Science.gov (United States)

    Farah, Alejandro; Tejada, Carlos; Gonzalez, Jesus; Cobos, Francisco J.; Sanchez, Beatriz; Fuentes, Javier; Ruiz, Elfego

    2004-09-01

    A Camera Barrel, located in the OSIRIS imager/spectrograph for the Gran Telescopio Canarias (GTC), is described in this article. The barrel design has been developed by the Institute for Astronomy of the University of Mexico (IA-UNAM), in collaboration with the Institute for Astrophysics of Canarias (IAC), Spain. The barrel is being manufactured by the Engineering Center for Industrial Development (CIDESI) at Queretaro, Mexico. The Camera Barrel includes a set of eight lenses (three doublets and two singlets), with their respective supports and cells, as well as two subsystems: the Focusing Unit, which is a mechanism that modifies the first doublet relative position; and the Passive Displacement Unit (PDU), which uses the third doublet as thermal compensator to maintain the camera focal length and image quality when the ambient temperature changes. This article includes a brief description of the scientific instrument; describes the design criteria related with performance justification; and summarizes the specifications related with misalignment errors and generated stresses. The Camera Barrel components are described and analytical calculations, FEA simulations and error budgets are also included.

  18. Analysis of the High Risk Factors of Fundus Screening and Fundus Diseases in Neonates

    Institute of Scientific and Technical Information of China (English)

    叶青; 何晓平

    2016-01-01

    Objective: To analyze fundus screening findings and the high-risk factors for fundus diseases in neonates. Method: 11 270 neonates who underwent fundus screening in our hospital from October 2013 to June 2016 were retrospectively analyzed. According to pediatric indications, the neonates were divided into a preterm group (n=716), a high-risk group (n=831) and a normal group (n=9723), and fundus diseases were compared between the three groups. Result: Fundus abnormalities were detected in 13.55% (1527/11 270) of neonates overall: the detection rate was 16.48% (118/716) in the preterm group, 17.57% (146/831) in the high-risk group and 12.99% (1263/9723) in the normal group. The detection rates of fundus abnormalities in the preterm and high-risk groups were both higher than in the normal group, and the differences were statistically significant (P<0.05). Logistic regression analysis showed that mechanical ventilation and natural delivery were high-risk factors for retinal hemorrhage in newborns (OR=1.754 and 3.263, P<0.05). Conclusion: Neonatal fundus lesions are diverse and can be seriously harmful; fundus screening allows their timely detection, and intervening against the high-risk factors gains valuable time for saving the newborn's vision and even life.

  19. Microvascular findings in patients with systemic lupus erythematosus assessed by fundus photography with fluorescein angiography.

    Science.gov (United States)

    Lee, Ji-Hyun; Kim, Sang-Soo; Kim, Geun-Tae

    2013-01-01

    Although a series of trials supports that systemic lupus erythematosus (SLE) is associated with increased atherosclerosis and cardiovascular events, the link between microvascular structural change and the disease activity of SLE is not defined. We measured retinal microvasculature change by fundus photography with fluorescein angiography (FAG) and investigated the association between retinal vasculature and clinical parameters of SLE. Fifty SLE patients and fifty healthy controls were included. Morphometric and quantitative features of the capillary image, including retinal vascular signs and vessel diameters, were measured with fundus photography and FAG. Information concerning SLE duration and cumulative dose of steroids and/or immunosuppressive drug intake was recorded, and autoantibodies were checked. SLE activity was assessed by the SLE disease activity index (SLEDAI). The mean central retinal arteriolar equivalent (CRAE) was 89.7±14.5 μm in SLE patients, a narrower arteriole than that of controls (102.2±11.3 μm). The mean central retinal venular equivalent (CRVE) was 127.7±14.8 μm in SLE patients, also narrower than that of controls (144.1±14.2 μm), but neither difference reached statistical significance (p=0.154 and p=0.609, respectively). Retinopathy was found in 26% of SLE patients. SLE patients with retinopathy were older than those without it, but the difference did not reach statistical significance. Disease duration, anti-dsDNA, and complement levels had no effect on the presence of retinopathy. SLE patients with retinopathy had a tendency to have higher cumulative steroid doses and hsCRP and IgG aCL levels than those without retinopathy. With multiple regression analysis, hsCRP and IgG aCL were identified as contributing factors to the decreased CRAE, whereas no contributing factor was found for CRVE. Retinopathy and retinal arteriolar narrowing were more common in SLE patients, and retinal arteriolar diameter had a significant correlation with hsCRP and IgG aCL levels. Retinal imaging is
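The CRAE and CRVE summary calibres above are conventionally computed with the Knudtson revision of the Parr-Hubbard formulas, which iteratively pairs the widest with the narrowest of the six largest vessels using a branching coefficient. The abstract does not state which formula was used, so the sketch below is illustrative only, with hypothetical widths:

```python
def _pair_round(widths, k):
    """One Knudtson pairing round: combine narrowest with widest; an odd
    middle vessel is carried over unchanged to the next round."""
    ws = sorted(widths)
    out = []
    while len(ws) > 1:
        narrow, wide = ws.pop(0), ws.pop()
        out.append(k * (narrow ** 2 + wide ** 2) ** 0.5)
    return out + ws

def central_equivalent(widths, k):
    """Iterate pairing rounds until a single summary calibre remains."""
    ws = list(widths)
    while len(ws) > 1:
        ws = _pair_round(ws, k)
    return ws[0]

# Branching coefficients in Knudtson's revision: 0.88 (CRAE), 0.95 (CRVE).
crae = central_equivalent([60.0, 55.0, 50.0, 48.0, 45.0, 40.0], 0.88)
```

A study like this one would compute `crae` per image from the measured arteriolar widths, in micrometers, and compare group means.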

  20. A Method of Drusen Measurement Based on the Geometry of Fundus Reflectance

    Directory of Open Access Journals (Sweden)

    Barbazetto Irene

    2003-04-01

    Full Text Available Abstract Background The hallmarks of age-related macular degeneration, the leading cause of blindness in the developed world, are the subretinal deposits known as drusen. Drusen identification and measurement play a key role in clinical studies of this disease. Current manual methods of drusen measurement are laborious and subjective. Our purpose was to expedite clinical research with an accurate, reliable digital method. Methods An interactive semi-automated procedure was developed to level the macular background reflectance for the purpose of morphometric analysis of drusen. Twelve color fundus photographs of patients with age-related macular degeneration and drusen were analyzed. After digitizing the photographs, the underlying background pattern in the green channel was leveled by an algorithm based on the elliptically concentric geometry of the reflectance in the normal macula: the gray-scale values of all structures within defined elliptical boundaries were raised sequentially until a uniform background was obtained. Segmentation of drusen and area measurements in the central and middle subfields (1000 μm and 3000 μm diameters) were performed by uniform thresholds. Two observers using this interactive semi-automated software measured each image digitally. The mean digital measurements were compared to independent stereo fundus gradings by two expert graders (stereo Grader 1 estimated the drusen percentage in each of the 24 regions as falling into one of four standard broad ranges; stereo Grader 2 estimated drusen percentages in 1% to 5% intervals). Results The mean digital area measurements had a median standard deviation of 1.9%. The mean digital area measurements agreed with stereo Grader 1 in 22/24 cases. The 95% limits of agreement between the mean digital area measurements and the more precise stereo gradings of Grader 2 were -6.4% to +6.8% in the central subfield and -6.0% to +4.5% in the middle subfield. The mean absolute
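The leveling idea, raising gray values within concentric elliptical zones until the background is uniform, can be sketched in a simplified form (the ring geometry and names here are ours; the published algorithm operates sequentially on the green channel of digitized photographs, with interactive boundary placement):

```python
import statistics

def level_background(img, cx, cy, rx, ry, n_rings=4):
    """Raise each concentric elliptical ring so its median gray level matches
    the brightest ring's median; assumes every ring contains pixels."""
    h, w = len(img), len(img[0])

    def ring_of(x, y):
        # Normalized elliptical radius, bucketed into n_rings zones.
        r = (((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2) ** 0.5
        return min(n_rings - 1, int(r * n_rings))

    rings = [[ring_of(x, y) for x in range(w)] for y in range(h)]
    medians = [statistics.median(img[y][x] for y in range(h) for x in range(w)
                                 if rings[y][x] == k) for k in range(n_rings)]
    target = max(medians)
    # Drusen would then be segmented by a uniform threshold on this output.
    return [[img[y][x] + (target - medians[rings[y][x]]) for x in range(w)]
            for y in range(h)]
```

On the leveled image, drusen brighter than the now-uniform background can be segmented with a single threshold, which is what makes the uniform-threshold area measurements in the study possible.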

  1. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, in detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure, and deploying ad-hoc solutions based on the curren...

  2. Toxoplasmosis with chorioretinitis in an HIV-infected child with no visual complaints—importance of fundus examination

    Science.gov (United States)

    Pereira, Noella Maria Delia; Shah, Ira; Lala, Mamatha

    2017-01-01

    Central nervous system lesions are common in HIV-infected patients. In the combination anti-retroviral therapy (ART) era, Toxoplasma reactivation has been observed only in patients with unrecognized HIV infection or refusing therapy. We present the case of 10-year-old girl with AIDS who initially presented with pneumonia. She was treated for pneumonia and thereafter started on ART as her CD4 count was low. However, 5 days after starting ART she presented with left ptosis and right-sided monoparesis. She was diagnosed with neurotoxoplasmosis and responded successfully to pyrimethamine–sulfadoxine therapy. Though she had no vision difficulties, her fundus examination revealed chorioretinitis during the hospital stay. We emphasize the importance of routine fundus examination prior to starting ART to rule out chorioretinitis even in an older child with no visual complaints. PMID:28058107

  4. Comparison of subjective and objective methods to determine the retinal arterio-venous ratio using fundus photography

    OpenAIRE

    Heitmar, Rebekka; Kalitzeos, Angelos A.; Patel, Sunni R.; Prabhu-Das, Diana; Cubbidge, Robert P.

    2015-01-01

    Purpose: To assess the inter- and intra-observer variability of subjective grading of the retinal arterio-venous ratio (AVR) using a visual grading, and to compare the subjectively derived grades to an objective method using a semi-automated computer program. Methods: Following intraocular pressure and blood pressure measurements, all subjects underwent dilated fundus photography. Eighty-six monochromatic retinal images with the optic nerve head centred (52 healthy volunteers) were obtained using a Zeis...

  5. RISK FACTORS FOR DIABETIC RETINOPATHY IN DIABETICS SCREENED USING FUNDUS PHOTOGRAPHY AT A PRIMARY HEALTH CARE SETTING IN EAST MALAYSIA

    Directory of Open Access Journals (Sweden)

    MALLIKA PS

    2011-01-01

    Full Text Available Introduction: This study reports on the prevalence of diabetic retinopathy (DR) and risk factors among diabetic patients who underwent fundus photography screening in a primary care setting of the Borneo Islands, East Malaysia. We aimed to explore the preliminary data to help in the planning of more effective preventive strategies for DR at the primary health care setting. Materials and Methods: A cross-sectional study on 738 known diabetic patients aged 19-82 years was conducted in 2004. Eye examination consisted of visual acuity testing followed by fundus photography for DR assessment. The fundus pictures were reviewed by a family physician and an ophthalmologist. Fundus photographs were graded as having no DR, NPDR, PDR or maculopathy. The data on other parameters were retrieved from patients' records. Bi-variate and multivariate analysis was used to elucidate the factors associated with DR. Results: Any DR was detected in 23.7% (95% CI=21 to 27%) of the patients, and 3.2% had proliferative DR. The risk factors associated with any DR were duration of DM (OR=2.5, CI=1.6 to 3.9 for duration of five to 10 years when compared to <5 years) and lower BMI (OR=1.8, CI=1.1 to 3.0). Moderate visual loss was associated with DR (OR=2.1, CI=1.2 to 3.7). Conclusions: This study confirms associations of DR with diabetic duration, body mass index and visual loss. Our data provide preliminary findings to help improve the screening and preventive strategies for DR at the primary health care setting.
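The odds ratios quoted above come from logistic regression; recovering an OR and its confidence interval from a fitted coefficient is a standard statistical identity worth making explicit (generic code, not from the study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and its SE."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# A coefficient of ln(2.5) with SE 0.23 gives OR = 2.5, 95% CI roughly (1.59, 3.92).
```

The multiplicative reading is what the abstract relies on: an OR of 2.5 means the odds of any DR are 2.5 times higher in the five-to-ten-year duration group than in the under-five-year group, other covariates held fixed.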

  6. Increased fundus autofluorescence and progression of geographic atrophy secondary to age-related macular degeneration. The GAIN study.

    OpenAIRE

    Biarnés Pérez, Marc, 1973-; Arias, Luis; Alonso Caballero, Jordi; García, Míriam; Hijano, Míriam; Rodríguez, Anabel; Serrano, Anna; Badal, Josep; Muhtaseb, Hussein; Verdaguer, Paula; Monés, Jordi

    2015-01-01

    PURPOSE: To define the role of increased fundus autofluorescence (FAF), a surrogate for lipofuscin content, as a risk factor for progression of geographic atrophy (GA). DESIGN: Prospective natural history cohort study, the GAIN (Characterization of geographic atrophy progression in patients with age-related macular degeneration). METHODS: setting: Single-center study conducted in Barcelona, Spain. PATIENTS: After screening of 211 patients, 109 eyes of 82 patients with GA secondary to age-rela...

  7. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  9. Selective inhibitory effects of niflumic acid on 5-HT-induced contraction of the rat isolated stomach fundus.

    Science.gov (United States)

    Scarparo, H C; Santos, G C; Leal-Cardoso, J H; Criddle, D N

    2000-06-01

    The effects of niflumic acid (NFA), an inhibitor of calcium-activated chloride currents I(Cl(Ca)), were compared with the actions of the voltage-dependent calcium channel (VDCC) blocker nifedipine on 5-hydroxytryptamine (5-HT)- and acetylcholine (ACh)-induced contractions of the rat isolated fundus. NFA (1-30 microM) elicited a concentration-dependent inhibition of contractions induced by 5-HT (10 microM), with a reduction to 15.5+/-6.0% of the control value at 30 microM. 1 microM nifedipine reduced the 5-HT-induced contraction to 15.2+/-4.9% of the control, an effect that was not greater in the additional presence of 30 microM NFA. In contrast, the contractile response to ACh (10 microM) was not inhibited by NFA in concentrations </=10 microM. Our results show that NFA can exert selective inhibitory effects on the chloride-dependent 5-HT-induced contractions of the rat fundus. The data support the hypothesis that activation of Cl(Ca) channels leading to calcium entry via VDCCs is a mechanism utilized by 5-HT, but not by ACh, to elicit contraction of the rat fundus.

  10. Investigation of the potential modulatory effect of biliverdin, carbon monoxide and bilirubin on nitrergic neurotransmission in the pig gastric fundus.

    Science.gov (United States)

    Colpaert, Erwin E; Timmermans, Jean-Pierre; Lefebvre, Romain A

    2002-12-20

    In porcine gastric fundus, we have investigated the colocalization of the bile pigment biosynthetic enzymes heme oxygenase-2 and biliverdin reductase with neuronal nitric oxide synthase (nNOS), the effect of carbon monoxide (CO) on fundic circular smooth muscle, and the possible modulatory effect of the bile pigments biliverdin and bilirubin on CO-mediated relaxations and on nitrergic relaxation. Heme oxygenase-2 and biliverdin reductase immunoreactivity was present in all nNOS-containing myenteric neurons. CO induced a concentration-dependent relaxation of fundic circular smooth muscle strips, which was completely blocked by the specific guanylate cyclase inhibitor 1H-(1,2,4)oxadiazolo(4,3-a)quinoxalin-1-one (ODQ). 3-(5'-hydroxymethyl-2'-furyl)-1-benzylindazole (YC-1), biliverdin and bilirubin strongly enhanced the amplitude of the CO-induced relaxation. Tin protoporphyrin had no effect on electrically induced nitrergic relaxation, but spectrophotometric analysis showed that incubation of porcine gastric fundus circular muscle strips with tin protoporphyrin did not influence heme oxygenase activity. In conclusion, our data suggest that nitrergic neurons in the pig gastric fundus are able to produce biliverdin and bilirubin, and that these agents potentiate the relaxant effect of CO, which is formed concomitantly with biliverdin by heme oxygenase-2.

  11. Adaptive Neuro-Fuzzy Inference System Approach for the Automatic Screening of Diabetic Retinopathy in Fundus Images

    Directory of Open Access Journals (Sweden)

    S. Kavitha

    2011-01-01

    Full Text Available Problem statement: Diabetic retinopathy is one of the most significant factors contributing to blindness, so early diagnosis and timely treatment are particularly important to prevent visual loss. Approach: An integrated approach for extraction of blood vessels and exudate detection was proposed to screen for diabetic retinopathy. An automated classifier was developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) to differentiate between normal and nonproliferative eyes from the quantitative assessment of monocular fundus images. Feature extraction was performed on the preprocessed fundus images. The structure of blood vessels was extracted using multiscale analysis. Hard exudates were detected in the fundus images using CIE color channel transformation, entropy thresholding and improved connected component analysis. Features such as the wall-to-lumen ratio in blood vessels, texture, homogeneity properties and the area occupied by hard exudates were given as input to ANFIS. ANFIS was trained with backpropagation in combination with the least-squares method. The proposed method was evaluated on 200 real-time images comprising 70 normal and 130 retinopathic eyes. Results and Conclusion: All of the results were validated against ground truths obtained from expert ophthalmologists. In quantitative terms, the method detected exudates with an accuracy of 99.5%. The receiver operating characteristic curve evaluated for real-time images produced better results than other state-of-the-art methods. ANFIS provides the best classification and can be used as a screening tool in the analysis and diagnosis of retinal images.
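One concrete reading of the entropy thresholding step is Kapur's maximum-entropy method, sketched here on a flat list of gray levels. This is a simplified stand-in for illustration; the paper combines thresholding with CIE color channel transformation and improved connected component analysis, which are not reproduced here:

```python
import math

def kapur_threshold(gray, levels=256):
    """Maximum-entropy (Kapur) threshold for a flat list of integer gray values."""
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    p = [h / len(gray) for h in hist]
    best_t, best_h = 0, float("-inf")
    cum = 0.0
    for t in range(levels - 1):
        cum += p[t]
        if cum <= 0.0 or cum >= 1.0:
            continue  # background or foreground class would be empty
        hb = -sum(pi / cum * math.log(pi / cum) for pi in p[:t + 1] if pi > 0)
        hf = -sum(pi / (1 - cum) * math.log(pi / (1 - cum))
                  for pi in p[t + 1:] if pi > 0)
        if hb + hf > best_h:
            best_t, best_h = t, hb + hf
    return best_t  # pixels with value > best_t are classed as bright (exudate-like)
```

The threshold maximizes the summed entropy of the background and foreground gray-level distributions, which tends to separate bright lesions from the retinal background without a hand-tuned cutoff.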

  12. Fundus Photography as a Screening Method for Diabetic Retinopathy in Children With Type 1 Diabetes: Outcome of the Initial Photography.

    Science.gov (United States)

    Gräsbeck, Thomas C; Gräsbeck, Sophia V; Miettinen, Päivi J; Summanen, Paula A

    2016-09-01

    To determine the success rate of the initial fundus photography session in producing gradable images for screening diabetic retinopathy in children. Photography success was classified as "complete" if both images of both eyes were gradable, "partial" if both images of 1 eye were gradable, "macula-centered image(s) only" if only the macula-centered image of one or both eyes was gradable, and "unsuccessful" if neither macula-centered image was gradable. Complete success was reached in 97 (46%; 95% confidence interval [CI], 39-52) patients, at least partial success in 153 (72%; 95% CI, 65-78) patients, success of macula-centered image(s) only in 47 (22%; 95% CI, 17-28) patients, and in 13 (6%; 95% CI, 3-10) patients fundus photography was unsuccessful. Macula-centered images were more often gradable in both eyes than optic disc-centered images. Success of photography did not differ between the right and left eye. Sex, age at diagnosis of T1D, duration of diabetes, and age and glycemic control at the time of initial photography were not associated with complete success. Partial success tended to decrease with increasing age category (P = .093), and the frequency of gradable macula-centered image(s) only increased with increasing age (P = .043). Less than half of the children achieved complete success, but in only 6% was initial fundus photography unsuccessful, indicating its value in assessing retinopathy in the pediatric setting. Copyright © 2016 Elsevier Inc. All rights reserved.
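The 95% confidence intervals quoted for each success category are standard binomial intervals on a proportion. The abstract does not state which method was used, so here is one common choice, the Wilson score interval, as a generic illustration:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

Unlike the simple normal approximation, the Wilson interval stays inside [0, 1] and behaves well for small counts such as the 13 unsuccessful sessions reported here.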

  13. Fundus white spots and acquired night blindness due to vitamin A deficiency.

    Science.gov (United States)

    Genead, Mohamed A; Fishman, Gerald A; Lindeman, Martin

    2009-12-01

    To report a successfully treated case of acquired night blindness associated with fundus white spots secondary to vitamin A deficiency. An ocular examination, electrophysiologic testing, and visual field and OCT examinations were obtained on a 61-year-old man with vitamin A deficiency who had previously undergone gastric bypass surgery. The patient was re-evaluated after treatment with high doses of oral vitamin A. The patient was observed to have numerous white spots in the retina of each eye. Best-corrected visual acuity was initially 20/80 in each eye, which improved to 20/40-1 OU after oral vitamin A therapy for 2 months. Full-field electroretinogram (ERG) testing showed non-detectable rod function and a 34% and 41% reduction below the lower limits of normal for 32-Hz flicker and single-flash cone responses, respectively. Both rod and cone function markedly improved after initiation of vitamin A therapy. Vitamin A deficiency needs to be considered in a patient with white spots of the retina in the presence of poor night vision.

  14. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME through the presence of exudation. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing (i.e., the classifier was trained on an independent dataset and tested on MESSIDOR). Our algorithm obtained an AUC between 0.88 and 0.94 depending on the dataset/features used. Additionally, it does not need ground truth at lesion level to reject false positives and is computationally efficient, generating a diagnosis in an average of 4.4 s per image (9.3 s including optic nerve localization) on a 2.6 GHz platform with an unoptimized Matlab implementation.
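The wavelet-decomposition features can be illustrated with a single-level 2D Haar transform written in plain NumPy; the paper's actual wavelet family and decomposition depth are not stated here, so this and the helper names are illustrative. Subband energies of an image patch then serve as classifier inputs:

```python
import numpy as np

def haar2d(x):
    """One level of a 2D Haar wavelet transform: approximation + 3 detail subbands."""
    x = x.astype(float)
    # average/difference along rows
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # then along columns
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0  # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0  # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0  # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def subband_energies(x):
    """Mean squared coefficient per subband, a simple texture feature vector."""
    return [float(np.mean(b ** 2)) for b in haar2d(x)]

flat = np.full((8, 8), 5.0)            # featureless region: detail energies are 0
edges = np.tile([0.0, 10.0], (8, 4))   # alternating columns: strong vertical detail
```

Bright lesions such as exudates show up as elevated detail-band energy relative to the smooth background.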

  15. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  16. Decision support system for the detection and grading of hard exudates from color fundus photographs.

    Science.gov (United States)

    Jaafar, Hussain F; Nandi, Asoke K; Al-Nuaimy, Waleed

    2011-11-01

    Diabetic retinopathy is a major cause of blindness, and its earliest signs include damage to the blood vessels and the formation of lesions in the retina. Automated detection and grading of hard exudates from the color fundus image is a critical step in the automated screening system for diabetic retinopathy. We propose novel methods for the detection and grading of hard exudates and the main retinal structures. For exudate detection, a novel approach based on coarse-to-fine strategy and a new image-splitting method are proposed with overall sensitivity of 93.2% and positive predictive value of 83.7% at the pixel level. The average sensitivity of the blood vessel detection is 85%, and the success rate of fovea localization is 100%. For exudate grading, a polar fovea coordinate system is adopted in accordance with medical criteria. Because of its competitive performance and ability to deal efficiently with images of variable quality, the proposed technique offers promising and efficient performance as part of an automated screening system for diabetic retinopathy.
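The polar fovea coordinate system used for grading can be sketched as assigning each detected exudate pixel a zone by its radial distance from the fovea. The zone radii below are illustrative placeholders, not the paper's medical criteria:

```python
import numpy as np

def grade_zone(points, fovea, zone_radii=(60.0, 120.0)):
    """Assign each (row, col) exudate pixel a zone index by distance from the fovea.

    Zone 0: inside zone_radii[0] (closest to fixation, most sight-threatening),
    zone 1: between the two radii, zone 2: beyond.  The radii (in pixels)
    are illustrative, not taken from the paper.
    """
    pts = np.asarray(points, dtype=float)
    d = np.hypot(pts[:, 0] - fovea[0], pts[:, 1] - fovea[1])
    return np.searchsorted(zone_radii, d)

# exudate pixels at the fovea, nearby, and far in the periphery
zones = grade_zone([(100, 100), (100, 180), (300, 300)], fovea=(100, 100))
```

A severity grade can then be derived from the worst (lowest-index) occupied zone and the exudate area per zone.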

  17. Computer Aided Diagnosis of Macular Edema Using Color Fundus Images: A Review

    Directory of Open Access Journals (Sweden)

    Devashree R. Zinjarde,

    2014-03-01

    Full Text Available Diabetic retinopathy is the leading cause of blindness in the western working-age population, and microaneurysms are one of the first pathologies associated with it. Diabetic retinopathy (DR) is caused by damage to the blood vessels of the retina, which affects vision; when DR becomes severe it can result in macular edema. The macula is the region near the centre of the retina that provides central vision. Blood vessels leak fluid onto the macula, and the resulting swelling blurs vision and can eventually lead to complete loss of vision. This paper addresses detecting edema-affected images and distinguishing them from normal ones; for an affected image, the severity of the disease is also graded using a rotational asymmetry metric that examines the symmetry of the macular region. Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate normal from DME images.
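A rotational asymmetry metric of the kind described can be illustrated by comparing a square macular patch with its 90° rotations; a healthy macula is roughly rotationally symmetric, while localized exudation breaks the symmetry. This is a simplified stand-in for the paper's metric, with illustrative values:

```python
import numpy as np

def rotational_asymmetry(patch):
    """Mean absolute difference between a square patch and its 90-degree rotations.

    Near 0 for a rotationally symmetric macular region; rises when a
    lesion sits in one quadrant.  Illustrative stand-in for the paper's metric.
    """
    base = patch.astype(float)
    rots = [np.rot90(base, k) for k in (1, 2, 3)]
    return float(np.mean([np.mean(np.abs(base - r)) for r in rots]))

symmetric = np.ones((32, 32)) * 10.0   # uniform, healthy-looking patch
lesion = symmetric.copy()
lesion[4:8, 4:8] = 200.0               # bright exudate in one quadrant
```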

  18. Optic Disc Segmentation by Balloon Snake with Texture from Color Fundus Image

    Directory of Open Access Journals (Sweden)

    Jinyang Sun

    2015-01-01

    Full Text Available A well-established method for the diagnosis of glaucoma is examination of the optic nerve head in a fundus image, as glaucomatous patients tend to have larger cup-to-disc ratios. Optic disc segmentation is difficult because of fuzzy boundaries and peripapillary atrophy (PPA). In this paper a novel method for optic nerve head segmentation is proposed. It uses template matching to find the region of interest (ROI). Vessels in the ROI are erased using PDE inpainting, which makes the boundary smoother. A novel optic disc segmentation approach using image texture is explored. A texture-based clustering step is applied before segmentation to remove edge noise such as the cup boundary and vessels. We replace the image force in the snake with image texture, and the initial contour of the balloon snake is placed inside the optic disc to avoid the PPA. Experimental results show the superior performance of the proposed method compared with some traditional segmentation approaches. An average segmentation Dice coefficient of 94% has been obtained.
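The template-matching step for ROI localization can be sketched as a brute-force zero-mean normalized cross-correlation in NumPy (a production system would typically use an optimized routine such as OpenCV's matchTemplate; the toy disc template here is illustrative):

```python
import numpy as np

def match_template(image, template):
    """Top-left (row, col) of the best zero-mean normalized cross-correlation match."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum()) + 1e-12
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            score = (w * t).sum() / ((np.sqrt((w ** 2).sum()) + 1e-12) * tnorm)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# toy fundus: one bright 8x8 "optic disc" on a dark background
img = np.zeros((40, 40))
img[10:18, 22:30] = 1.0
tmpl = np.zeros((10, 10))
tmpl[1:9, 1:9] = 1.0   # bright disc with a dark border
```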

  19. Microaneurysms detection with the radon cliff operator in retinal fundus images

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Tobin Jr, Kenneth William [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2010-01-01

    Diabetic Retinopathy (DR) is one of the leading causes of blindness in the industrialized world. Early detection is the key to providing effective treatment. However, the current number of trained eye care specialists is inadequate to screen the increasing number of diabetic patients. In recent years, automated and semi-automated systems to detect DR with color fundus images have been developed with encouraging, but not fully satisfactory, results. In this study we present the initial results of a new technique for the detection and localization of microaneurysms, an early sign of DR. The algorithm is based on three steps: candidate selection, the actual microaneurysm detection, and a final probability evaluation. We introduce the new Radon Cliff operator, which is our main contribution to the field. Making use of the Radon transform, the operator is able to detect single noisy Gaussian-like circular structures regardless of their size or strength. The advantages over existing microaneurysm detectors are manifold: the size of the lesions can be unknown, the operator automatically distinguishes lesions from the vasculature, and it provides a fair approach to microaneurysm localization even without post-processing the candidates with machine learning techniques, facilitating the training phase. The algorithm is evaluated on a publicly available dataset from the Retinopathy Online Challenge.
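The property such a Radon-based operator can exploit is that a Gaussian-like circular structure produces the same profile in every Radon projection, while elongated vasculature does not. This can be illustrated with just the two axis-aligned projections (a sketch of the underlying idea, not the Radon Cliff operator itself):

```python
import numpy as np

def projections(img):
    """The two axis-aligned Radon projections (0 and 90 degrees): column and row sums."""
    return img.sum(axis=0), img.sum(axis=1)

# symmetric pixel grid centered on the structure
y, x = np.mgrid[-16:16, -16:16] + 0.5
blob = np.exp(-(x ** 2 + y ** 2) / 8.0)            # circular, microaneurysm-like
vessel = np.exp(-(x ** 2 / 50.0 + y ** 2 / 2.0))   # elongated, vessel-like

p0_blob, p90_blob = projections(blob)
p0_vessel, p90_vessel = projections(vessel)
```

For the circular blob the two projections coincide; for the vessel they differ sharply, which is what lets a projection-based detector separate lesions from vasculature.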

  20. Detection of exudates in fundus photographs using deep neural networks and anatomical landmark detection fusion.

    Science.gov (United States)

    Prentašić, Pavle; Lončarić, Sven

    2016-12-01

    Diabetic retinopathy is one of the leading disabling chronic diseases and one of the leading causes of preventable blindness in the developed world. Early diagnosis of diabetic retinopathy enables timely treatment, and achieving it at scale will require a major investment in automated population screening programs. Detection of exudates in color fundus photographs is very important for early diagnosis of diabetic retinopathy. We use deep convolutional neural networks for exudate detection. To incorporate high-level anatomical knowledge about potential exudate locations, the output of the convolutional neural network is combined with the outputs of the optic disc detection and vessel detection procedures. In a validation step using a manually segmented image database we obtain a maximum F1 measure of 0.78. As manually segmenting and counting exudate areas is a tedious task, a reliable automated output, such as automated segmentation using convolutional neural networks in combination with other landmark detectors, is an important step in creating automated screening programs for early detection of diabetic retinopathy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
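The fusion of the network's output with the anatomical detectors can be sketched as masking the exudate probability map with the optic disc and vessel masks before thresholding, since exudates cannot lie on those structures. The threshold below is an illustrative operating point, not the paper's:

```python
import numpy as np

def fuse(prob_map, disc_mask, vessel_mask, thresh=0.5):
    """Suppress CNN exudate probabilities on the optic disc and vessels.

    disc_mask/vessel_mask are boolean maps from separate landmark detectors;
    thresh is a hypothetical operating point.
    """
    fused = prob_map * (~disc_mask) * (~vessel_mask)
    return fused >= thresh

# toy 2x2 example: high probabilities on the disc and on a vessel get suppressed
prob = np.array([[0.9, 0.2], [0.8, 0.7]])
disc = np.array([[True, False], [False, False]])
vessel = np.array([[False, False], [True, False]])
out = fuse(prob, disc, vessel)
```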

  1. 5-Hydroxytryptamine-induced calcium sparks in cultured rat stomach fundus smooth muscle cells

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Xiaoling; (张小玲); YAN; Hongtao; (阎宏涛); YAN; Yang; (闫炀)

    2003-01-01

    With a new fluorescent Ca2+ probe, STDIn-AM, 5-hydroxytryptamine (5-HT)-induced spontaneous calcium release events (calcium sparks) in cultured rat stomach fundus smooth muscle cells (SFSMC) were investigated by laser scanning confocal microscopy. The mechanisms of initiation of Ca2+ sparks and propagating Ca2+ waves, and their relation to E-C coupling, are discussed. After the extracellular [Ca2+] was increased to 10 mmol/L, addition of 5-HT caused hot spots throughout the cytoplasm, brighter near the plasmalemma. The amplitude of each event is at least two times greater than the standard deviation of fluorescence intensity fluctuations measured in the neighboring region, and the duration of the Ca2+ signal is over 100 ms. The results suggest that 5-HT acts via 5-HT2 receptors on SFSMC, coupling the IP3/Ca2+ and DG/PKC dual signal transduction pathways to cause Ca2+ release from intracellular Ca2+ stores, followed by Ca2+ influx, possibly through calcium release-activated calcium channels. Activated 5-HT2 receptors can also cause membrane depolarization, which stimulates L-type Ca2+ channels, leading to Ca2+ influx. The local Ca2+ entry mentioned above then activates ryanodine-sensitive Ca2+ release channels (RyR) on the sarcoplasmic reticulum (SR) to cause local Ca2+ release events (Ca2+ sparks) through calcium-induced calcium release (CICR).

  2. Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma

    Science.gov (United States)

    Muramatsu, Chisako; Hayashi, Yoshinori; Sawada, Akira; Hatanaka, Yuji; Hara, Takeshi; Yamamoto, Tetsuya; Fujita, Hiroshi

    2010-01-01

    Retinal nerve fiber layer defect (NFLD) is a major sign of glaucoma, which is the second leading cause of blindness in the world. Early detection of NFLDs is critical for improved prognosis of this progressive, blinding disease. We have investigated a computerized scheme for detection of NFLDs on retinal fundus images. In this study, 162 images, including 81 images with 99 NFLDs, were used. After the major blood vessels were removed, the images were transformed on the basis of ellipses so that the curved paths of the retinal nerve fibers become approximately straight, and Gabor filters were applied to enhance the NFLDs. Band-like regions darker than the surrounding pixels were detected as NFLD candidates. For each candidate, image features were determined, and the likelihood of a true NFLD was computed using linear discriminant analysis and an artificial neural network (ANN). The sensitivity for detecting NFLDs was 91% at 1.0 false positive per image using the ANN. The proposed computerized system for the detection of NFLDs can be useful to physicians in the diagnosis of glaucoma in mass screening.
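The Gabor filtering used to enhance the straightened defects can be sketched by building a real (cosine-phase) Gabor kernel and checking that it responds strongly to an aligned stripe pattern but not at all to a uniform region. The parameter values are illustrative, not the paper's:

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0, gamma=0.5):
    """Real Gabor kernel: an oriented band-pass filter.

    theta = orientation, lam = wavelength of the carrier, sigma = Gaussian
    envelope width, gamma = aspect ratio.  Values are illustrative.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero-mean: no response on flat regions

k = gabor_kernel()
# stripes matched in orientation, wavelength, and phase to the kernel
stripes = np.cos(2 * np.pi * (np.arange(15) - 7) / 6.0)[None, :].repeat(15, axis=0)
aligned = float((k * stripes).sum())            # strong positive response
uniform_resp = float((k * np.ones((15, 15))).sum())  # ~0 on a uniform region
```

Convolving a bank of such kernels at several orientations with the transformed image highlights band-like NFLD candidates.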

  3. Fundus auto fluorescence and spectral domain ocular coherence tomography in the early detection of chloroquine retinopathy

    Directory of Open Access Journals (Sweden)

    Megan B. Goodman

    2015-03-01

    Full Text Available Purpose: To determine the sensitivity of spectral domain ocular coherence tomography (SD-OCT) and fundus autofluorescence (FAF) images as a screening test to detect early changes in the retina prior to the onset of chloroquine retinopathy. Method: The study was conducted on patients taking chloroquine (CQ), referred by the Rheumatology Department to the Ophthalmology Department at Tygerberg Academic Hospital. Group A consisted of 59 patients on CQ for less than 5 years, and Group B of 53 patients on CQ for more than 5 years. A 200 × 200 macula thickness map, 5-line raster SD-OCT on a Carl Zeiss Meditec Cirrus HD-OCT, and FAF images on a Carl Zeiss Meditec Visucam 500 were recorded for 223 eyes. Images were reviewed independently, and those of Groups A and B compared. Results: There were no statistically significant differences between Groups A and B. The criteria included the internal limiting membrane to retinal pigment epithelium (ILM-RPE) thickness, interdigitation zone integrity (p = 0.891, df = 1, χ² = 0.1876), ellipsoid zone integrity (p = 0.095, df = 2, χ² = 4.699), and FAF image irregularities (p = 0.479, df = 1, χ² = 4995978). Conclusion: The inclusion of SD-OCT and FAF as objective tests in the prescribed screening guidelines does not appear to simplify the detection of subclinical injury in patients on chloroquine treatment.

  4. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    Science.gov (United States)

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A [Formula: see text]-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's [Formula: see text]), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
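The nearest-neighbor pixel classification can be sketched in a few lines of NumPy; the two toy features below stand in for the paper's intensity, co-occurrence, and Gaussian-filter-bank features, and the fraction of positive votes serves as the GA probability:

```python
import numpy as np

def knn_predict(train_feats, train_labels, feats, k=3):
    """Label each pixel feature vector by majority vote of its k nearest
    training pixels (Euclidean distance).  A minimal sketch of the classifier."""
    d = np.linalg.norm(feats[:, None, :] - train_feats[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_labels[nearest]
    return (votes.mean(axis=1) >= 0.5).astype(int)  # 1 = GA, 0 = background

# toy features (intensity, local contrast); GA pixels are dark and low-contrast
train = np.array([[0.10, 0.10], [0.15, 0.20], [0.20, 0.10],
                  [0.80, 0.90], [0.90, 0.80], [0.85, 0.95]])
labels = np.array([1, 1, 1, 0, 0, 0])
pred = knn_predict(train, labels, np.array([[0.12, 0.15], [0.88, 0.90]]))
```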

  5. An optical metasurface planar camera

    CERN Document Server

    Arbabi, Amir; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are 2D arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optical design by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked on top of each other and are integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here, we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has an f-number of 0.9, an angle-of-view larger than 60°×60°, and operates at 850 nm wavelength with large transmission. The camera exhibits high image quality, which indicates the potential of this technology to produce a paradigm shift in future designs of imaging systems for microscopy, photograp...

  6. Combustion pinhole-camera system

    Science.gov (United States)

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  7. Mirrored Light Field Video Camera Adapter

    OpenAIRE

    Tsai, Dorian; Dansereau, Donald G.; Martin, Steve; Corke, Peter

    2016-01-01

    This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible. Mirrors of different shape and orientation reflect the scene into an upwards-facing camera to create an array of virtual cameras with overlapping field of view at specified depths, and deliver video frame rate light fields. We describe the design, construction, decoding and calibration processes of our mirror-based light field camera adapter in preparation ...

  8. Automated Placement of Multiple Stereo Cameras

    OpenAIRE

    Malik, Rahul; Bajcsy, Peter

    2008-01-01

    This paper presents a simulation framework for multiple stereo camera placement. Multiple stereo camera systems are becoming increasingly popular these days. Applications of multiple stereo camera systems such as tele-immersive systems enable cloning of dynamic scenes in real-time and delivering 3D information from multiple geographic locations to everyone for viewing it in virtual (immersive) 3D spaces. In order to make such multi stereo camera systems ubiquitous, sol...

  9. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol

    Directory of Open Access Journals (Sweden)

    Hui-Qun Wu

    2013-12-01

    Full Text Available AIM: To address interoperability issues between different fundus image systems, we proposed a web eye picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication over the internet. METHODS: Firstly, a telemedicine-based eye care workflow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a three-tier browser/server eye-PACS system was established in conformance with the web access to DICOM persistent objects (WADO) protocol. RESULTS: From any client with a web browser, clinicians can log in to the eye-PACS to view fundus images and reports. A structured report saved with multipurpose internet mail extensions (MIME) type pdf/html, containing a reference link to the relevant fundus image using the WADO syntax, can provide enough information for clinicians. Functions provided by the open-source Oviyam viewer can be used to query, zoom, pan, measure, and view DICOM fundus images. CONCLUSION: Such a web eye-PACS in compliance with the WADO protocol can be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
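A WADO-URI retrieval request of the kind such a system serves is a plain HTTP GET whose query parameters (requestType, studyUID, seriesUID, objectUID, contentType) are defined by DICOM PS3.18; the base URL and UID values below are placeholders, not from the paper:

```python
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid, content_type="image/jpeg"):
    """Build a WADO-URI retrieval URL (DICOM PS3.18) for a stored fundus image.

    The base URL and UIDs passed in are placeholders for illustration.
    """
    params = {
        "requestType": "WADO",        # fixed value required by WADO-URI
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,  # e.g. image/jpeg or application/dicom
    }
    return base + "?" + urlencode(params)

url = wado_uri("http://pacs.example.org/wado",
               "1.2.840.113619.2.1",
               "1.2.840.113619.2.1.1",
               "1.2.840.113619.2.1.1.1")
```

A structured report can embed such a URL as its reference link, letting any browser-based client fetch the image without DICOM networking.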

  10. An analysis of surgical anatomy of the gastric fundus in bariatric surgery: why the gastric pouch expands? A point of technique.

    Science.gov (United States)

    Kassir, Radwan; Blanc, Pierre; Lointier, Patrice; Tiffet, Olivier; Breton, Christophe; Ben Amor, Imed; Iannelli, Antonio; Gugenheim, Jean

    2014-11-01

    In bariatric surgery, it is essential to completely release the fundus in order to create a narrow gastric pouch. The upper part of the fundus is located above the omental bursa and is therefore retroperitoneal. To release it completely, not only must the arterial supply to the fundus be divided to visualise the left diaphragmatic pillar, but the right attachment beginning at the left diaphragmatic pillar and running towards the fundus must also be divided. Minimal dissection here is compensated by further dissection at the level of the left diaphragmatic pillar and traction on the stomach from right to left during the final stapling division process. The surgeon may still have the impression of having released the posterior aspect of the fundus, exposing the pillar of the diaphragm, although in fact part of the fundus remains adherent to the diaphragm and is therefore not released. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  11. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
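The transfer-function analysis trades diffraction blur (which grows as the pinhole shrinks) against geometric blur (which grows as it widens). A commonly cited closed-form compromise, Rayleigh's d ≈ 1.9·√(λf), is an assumption here rather than a result from this paper, but it shows the scaling the graphic method optimizes:

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9):
    """Rayleigh-style optimum balancing diffraction against geometric blur.

    The 1.9 constant is the commonly cited value, not taken from this paper;
    other analyses give constants between roughly 1.5 and 2.
    """
    return 1.9 * math.sqrt(wavelength_m * focal_length_m)

d = optimal_pinhole_diameter(0.1)  # 100 mm focal length, green light: ~0.45 mm
```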

  12. SPEIR: A Ge Compton Camera

    Energy Technology Data Exchange (ETDEWEB)

    Mihailescu, L; Vetter, K M; Burks, M T; Hull, E L; Craig, W W

    2004-02-11

    The SPEctroscopic Imager for γ-Rays (SPEIR) is a new concept of a compact γ-ray imaging system of high efficiency and spectroscopic resolution with a 4π field-of-view. The system behind this concept employs double-sided segmented planar Ge detectors accompanied by the use of list-mode photon reconstruction methods to create a sensitive, compact Compton scatter camera.

  13. Automatic tracking sensor camera system

    Science.gov (United States)

    Tsuda, Takao; Kato, Daiichiro; Ishikawa, Akio; Inoue, Seiki

    2001-04-01

    We are developing a sensor camera system for automatically tracking and determining the positions of subjects moving in three dimensions. The system is intended to operate even within areas as large as soccer fields. It measures the 3D coordinates of the object while driving the pan and tilt movements of the camera heads and the degree of zoom of the lenses. Its principal feature is that it automatically zooms in as the object moves farther away and zooms out as the object moves closer, keeping the image area of the object fixed; this makes stable detection by the image processing possible. We plan to use the system to detect the position of the ball during a soccer game. In this paper, we describe the configuration of the automatic tracking sensor camera system under development. We then give an analysis of the movements of the ball within images of games, the results of experiments on the image-processing method used to detect the ball, and the results of further experiments to verify the accuracy of an experimental system. These results show that the system is sufficiently accurate in terms of obtaining positions in three dimensions.
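The auto-zoom behavior follows from the pinhole model: image size = f · object size / distance, so the focal length needed to hold the image size constant scales linearly with distance. The function and values below are illustrative, not the system's actual control law:

```python
def zoom_for_constant_size(distance_m, object_size_m, desired_image_mm):
    """Focal length (mm) that keeps the object's image at a fixed size.

    Pinhole model: image_size = f * object_size / distance, solved for f.
    Values are illustrative.
    """
    return desired_image_mm * distance_m / object_size_m

f_near = zoom_for_constant_size(10.0, 0.22, 5.0)  # ball ~22 cm wide, 10 m away
f_far = zoom_for_constant_size(40.0, 0.22, 5.0)   # same ball, 40 m away
```

Quadrupling the distance quadruples the required focal length, which is exactly the zoom-in-with-range behavior the abstract describes.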

  14. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration (on arbitrary scenes). In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm's performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
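The vertical-disparity check at the heart of this approach can be sketched as computing the median vertical offset between matched keypoints and flagging frames that exceed a tolerable margin; the threshold value and helper name are illustrative:

```python
import numpy as np

def vertical_disparity_ok(left_pts, right_pts, max_px=2.0):
    """Median vertical disparity between matched (x, y) keypoints.

    Returns (within_tolerance, median_dy).  The 2-pixel margin is a
    hypothetical tolerance, not the paper's.
    """
    dy = np.asarray(left_pts, float)[:, 1] - np.asarray(right_pts, float)[:, 1]
    med = float(np.median(dy))
    return abs(med) <= max_px, med

# matched keypoints as (x, y); the right camera sits ~1 px lower on average
left = [(10, 50), (40, 80), (70, 120)]
right = [(5, 49), (35, 79), (65, 121)]
ok, med = vertical_disparity_ok(left, right)
```

The median (rather than the mean) keeps a few erroneous matches from corrupting the estimate, matching the paper's emphasis on discarding bad frames.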

  15. Image Based Camera Localization: an Overview

    OpenAIRE

    Wu, Yihong

    2016-01-01

    Recently, virtual reality, augmented reality, robotics, self-driving cars, and related fields have attracted much attention from the industrial community, and image-based camera localization is a key task in all of them. An overview of image-based camera localization is therefore urgently needed. In this paper, such an overview is presented. It will be useful not only to researchers but also to engineers.

  16. 21 CFR 886.1120 - Ophthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ophthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Ophthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  17. 21 CFR 892.1110 - Positron camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  18. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENT OF GENERAL POLICY OR INTERPRETATION AND... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the...

  19. Comparison of optic area measurement using fundus photography and optical coherence tomography between optic nerve head drusen and control subjects.

    Science.gov (United States)

    Flores-Rodríguez, Patricia; Gili, Pablo; Martín-Ríos, María Dolores; Grifol-Clar, Eulalia

    2013-03-01

    To compare optic disc area measurement between optic nerve head drusen (ONHD) and control subjects using fundus photography, time-domain optical coherence tomography (TD-OCT) and spectral-domain optical coherence tomography (SD-OCT). We also made a comparison between each of the three techniques. We performed our study on 66 eyes (66 patients) with ONHD and 70 healthy control subjects (70 controls) with colour ocular fundus photography at 20° (Zeiss FF 450 IR plus), TD-OCT (Stratus OCT) with the Fast Optic Disc protocol and SD-OCT (Cirrus OCT) with the Optic Disc Cube 200 × 200 protocol for measurement of the optic disc area. The measurements were made by two observers and in each measurement a correction of the image magnification factor was performed. Measurement comparison using the Student's t-test/Mann-Whitney U test, the intraclass correlation coefficient, Pearson/Spearman rank correlation coefficient and the Bland-Altman plot was performed in the statistical analysis. Mean and standard deviation (SD) of the optic disc area in ONHD and in controls was 2.38 (0.54) mm(2) and 2.54 (0.42) mm(2), respectively with fundus photography; 2.01 (0.56) mm(2) and 1.66 (0.37) mm(2), respectively with TD-OCT, and 2.03 (0.49) mm(2) and 1.75 (0.38) mm(2), respectively with SD-OCT. In ONHD and controls, repeatability of optic disc area measurement was excellent with fundus photography and optical coherence tomography (TD-OCT and SD-OCT), but with a low degree of agreement between both techniques. Optic disc area measurement is smaller in ONHD compared to healthy subjects with fundus photography, unlike time-domain and spectral-domain optical coherence tomography in which the reverse is true. Both techniques offer good repeatability, but a low degree of correlation and agreement, which means that optic disc area measurement is not interchangeable or comparable between techniques. Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  20. Mini gamma camera, camera system and method of use

    Science.gov (United States)

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

A gamma camera comprising, essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
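The center-of-gravity step can be illustrated as an intensity-weighted centroid over the detector image. This is a generic sketch of that computation, not the patent's specific algorithm:

```python
import numpy as np

def center_of_gravity(image):
    """Intensity-weighted centroid (row, col) of a 2-D detector image."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    if total == 0:
        raise ValueError("image has no counts")
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# A single bright "abnormality" at pixel (2, 3):
frame = np.zeros((5, 5))
frame[2, 3] = 10.0
print(center_of_gravity(frame))  # (2.0, 3.0)
```

With a spatially extended abnormality, the same formula returns the count-weighted mean position, which is why it is robust to the blob's exact shape.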

  1. Spectrometry with consumer-quality CMOS cameras.

    Science.gov (United States)

    Scheeline, Alexander

    2015-01-01

Many modern spectrometric instruments use diode arrays, charge-coupled arrays, or CMOS cameras for detection and measurement. As portable or point-of-use instruments are desirable, one would expect the cameras in cellular telephones and tablet computers to be the basis of numerous instruments. However, no mass market for such devices has yet developed. The difficulties in using megapixel CMOS cameras for scientific measurements are discussed, and promising avenues for instrument development are reviewed. Inexpensive alternatives to use of the built-in camera are also mentioned, as the long-term question is whether it is better to overcome the constraints of CMOS cameras or to bypass them.

  2. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
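The parameters the method recovers fit the standard pinhole model: a camera is "calibrated" when its intrinsic matrix K and extrinsic pose [R|t] are known, so a 3-D point X projects to the pixel x ~ K(RX + t). A minimal numeric sketch with illustrative values (not the paper's Matlab implementation or data):

```python
import numpy as np

# Intrinsics: focal lengths (fx, fy) and principal point (cx, cy).
# All numbers here are assumed for illustration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # extrinsics: camera aligned with world axes
t = np.array([0.0, 0.0, 5.0])      # world origin 5 units in front of the camera

def project(X):
    """Map a 3-D world point to pixel coordinates via x ~ K(RX + t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# The world origin lands on the principal point (cx, cy):
print(project(np.array([0.0, 0.0, 0.0])))  # [320. 240.]
```

Calibration is the inverse task: given many known 3-D points and their observed pixels, solve for K, R and t that best explain the projections.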

  3. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    OpenAIRE

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P. T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short conf...

  4. Automatic calibration method for plenoptic camera

    Science.gov (United States)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without the prior knowledge of camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method by the raw data of Lytro. The experiments show that our method has higher intelligence than the methods published before.
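The center-finding step on the white image can be sketched as peak detection; this stand-in uses simple local maxima on a synthetic regular grid rather than the paper's digital-morphology search:

```python
import numpy as np

def microlens_centers(white, thresh=0.5):
    """Locate microlens-image centers on a white image as local maxima
    above a threshold (a simple stand-in for the morphology-based search)."""
    w = np.pad(white, 1)            # zero border so edges compare cleanly
    core = w[1:-1, 1:-1]
    is_peak = ((core > thresh) &
               (core >= w[:-2, 1:-1]) & (core >= w[2:, 1:-1]) &
               (core >= w[1:-1, :-2]) & (core >= w[1:-1, 2:]))
    return sorted((int(r), int(c)) for r, c in zip(*np.nonzero(is_peak)))

# Synthetic white image: bright spots on a regular 8-pixel grid.
img = np.zeros((24, 24))
for r in range(4, 24, 8):
    for c in range(4, 24, 8):
        img[r, c] = 1.0
print(microlens_centers(img)[:3])  # [(4, 4), (4, 12), (4, 20)]
```

The rearrangement step described in the abstract would then sort these centers into grid rows and columns from their relative positions, without prior knowledge of the camera parameters.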

  5. Task analysis of laparoscopic camera control schemes.

    Science.gov (United States)

    Ellis, R Darin; Munaco, Anthony J; Reisner, Luke A; Klein, Michael D; Composto, Anthony M; Pandya, Abhilash K; King, Brady W

    2016-12-01

Minimally invasive surgeries rely on laparoscopic camera views to guide the procedure. Traditionally, an expert surgical assistant operates the camera. In some cases, a robotic system is used to help position the camera, but the surgeon is required to direct all movements of the system. Some prior research has focused on developing automated robotic camera control systems, but that work has been limited to rudimentary control schemes due to a lack of understanding of how the camera should be moved for different surgical tasks. This research used task analysis with a sample of eight expert surgeons to discover and document several salient methods of camera control and their related task contexts. Desired camera placements and behaviours were established for two common surgical subtasks (suturing and knot tying). The results can be used to develop better robotic control algorithms that will be more responsive to surgeons' needs. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electron read noise at a 1MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.
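The quoted dynamic range relates read noise to full-well capacity as DR(dB) = 20·log₁₀(full well / read noise). A quick check of what the stated figures imply (the full-well capacity is an inference, not a number quoted in the abstract):

```python
import math

read_noise_e = 14.0      # electrons RMS, from the abstract
dynamic_range_db = 70.0  # from the abstract

# Full-well capacity implied by these two figures (an inference, not a spec):
full_well_e = read_noise_e * 10 ** (dynamic_range_db / 20)
print(round(full_well_e))  # 44272 electrons
```

So a 70 dB range with 14 e⁻ read noise corresponds to roughly a 44 ke⁻ usable signal swing, a plausible figure for a scientific-grade CCD at a 1 MHz readout rate.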

  7. Automatic segmentation of blood vessels from retinal fundus images through image processing and data mining techniques

    Indian Academy of Sciences (India)

    R Geetharamani; Lakshmi Balasubramanian

    2015-09-01

Machine Learning techniques have been useful in almost every field of concern. Data Mining, a branch of Machine Learning, is one of the most extensively used techniques. The ever-increasing demands in the field of medicine are being addressed by computational approaches in which Big Data analysis, image processing and data mining are on top priority. These techniques have been exploited in the domain of ophthalmology for better retinal fundus image analysis. Blood vessels, one of the most significant retinal anatomical structures, are analysed for diagnosis of many diseases like retinopathy, occlusion and many other vision-threatening diseases. Vessel segmentation can also be a pre-processing step for segmentation of other retinal structures like the optic disc, fovea, microaneurysms, etc. In this paper, blood vessel segmentation is attempted through image processing and data mining techniques. The retinal blood vessels were segmented through color space conversion and color channel extraction, image pre-processing, Gabor filtering, image post-processing, feature construction through application of principal component analysis, k-means clustering, first-level classification using the Naïve Bayes classification algorithm and second-level classification using C4.5 enhanced with bagging techniques. Association of every pixel against the feature vector necessitates Big Data analysis. The proposed methodology was evaluated on a publicly available database, STARE. The results reported 95.05% accuracy on the entire dataset; the accuracy was 95.20% on normal images and 94.89% on pathological images. A comparison of these results with the existing methodologies is also reported. This methodology can help ophthalmologists achieve better and faster analysis, and hence earlier treatment of patients.
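Of the stages listed, the unsupervised clustering step is the easiest to sketch. Below, a toy green-channel image is split into vessel/background clusters with plain k-means; the Gabor filtering, PCA and the two-level Naïve Bayes/C4.5 classification of the paper are omitted, and all data are synthetic:

```python
import numpy as np

def kmeans(X, k=2, iters=10):
    """Plain k-means on per-pixel feature vectors (rows of X),
    initialised deterministically between the feature min and max."""
    centers = np.linspace(X.min(axis=0), X.max(axis=0), k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy "fundus" image: vessels appear as dark ridges in the green channel.
rgb = np.full((32, 32, 3), 200.0)
rgb[:, 10, 1] = 40.0                  # one dark vertical "vessel"
green = rgb[..., 1]                   # step 1: colour channel extraction
features = green.reshape(-1, 1)       # step 2: per-pixel feature vectors
labels = kmeans(features).reshape(green.shape)  # step 3: clustering
# Cluster 0 (darker centre) collects the vessel pixels, cluster 1 the rest.
```

In the paper's pipeline the feature vector per pixel is much richer (Gabor responses, PCA components) and the cluster assignments feed a supervised classifier rather than being the final answer.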

  8. Simultaneous macula detection and optic disc boundary segmentation in retinal fundus images

    Science.gov (United States)

    Girard, Fantin; Kavalec, Conrad; Grenier, Sébastien; Ben Tahar, Houssem; Cheriet, Farida

    2016-03-01

    The optic disc (OD) and the macula are important structures in automatic diagnosis of most retinal diseases inducing vision defects such as glaucoma, diabetic or hypertensive retinopathy and age-related macular degeneration. We propose a new method to detect simultaneously the macula and the OD boundary. First, the color fundus images are processed to compute several maps highlighting the different anatomical structures such as vessels, the macula and the OD. Then, macula candidates and OD candidates are found simultaneously and independently using seed detectors identified on the corresponding maps. After selecting a set of macula/OD pairs, the top candidates are sent to the OD segmentation method. The segmentation method is based on local K-means applied to color coordinates in polar space followed by a polynomial fitting regularization step. Pair scores are updated, resulting in the final best macula/OD pair. The method was evaluated on two public image databases: ONHSD and MESSIDOR. The results show an overlapping area of 0.84 on ONHSD and 0.90 on MESSIDOR, which is better than recent state of the art methods. Our segmentation method is robust to contrast and illumination problems and outputs the exact boundary of the OD, not just a circular or elliptical model. The macula detection has an accuracy of 94%, which again outperforms other macula detection methods. This shows that combining the OD and macula detections improves the overall accuracy. The computation time for the whole process is 6.4 seconds, which is faster than other methods in the literature.

  9. Influence of bilirubin and other antioxidants on nitrergic relaxation in the pig gastric fundus.

    Science.gov (United States)

    Colpaert, E E; Lefebvre, R A

    2000-03-01

1. The influence of several antioxidants (bilirubin, urate, ascorbate, alpha-tocopherol, glutathione (GSH), Cu/Zn superoxide dismutase (SOD) and the manganese SOD mimic EUK-8) on nitrergic relaxations induced by either exogenous nitric oxide (NO; 10⁻⁵ M) or electrical field stimulation (4 Hz; 10 s and 3 min) was studied in the pig gastric fundus. 2. Ascorbate (5×10⁻⁴ M), alpha-tocopherol (4×10⁻⁴ M), SOD (300-1000 u ml⁻¹) and EUK-8 (3×10⁻⁴ M) did not influence the relaxations to exogenous NO. In the presence of GSH (5×10⁻⁴ M), the short-lasting relaxation to NO became biphasic, potentiated and prolonged. Urate (4×10⁻⁴ M) and bilirubin (2×10⁻⁴ M) also potentiated the relaxant effect of NO. None of the antioxidants influenced the electrically evoked relaxations. 3. 6-Anilino-5,8-quinolinedione (LY83583; 10⁻⁵ M) had no influence on nitrergic nerve stimulation but nearly abolished the relaxant response to exogenous NO. Urate and GSH completely prevented this inhibitory effect, while it was partially reversed by SOD and bilirubin. Ascorbate, alpha-tocopherol and EUK-8 were without effect. 4. Hydroquinone (10⁻⁴ M) did not affect the electrically induced nitrergic relaxations, but markedly reduced NO-induced relaxations. The inhibition of exogenous NO by hydroquinone was completely prevented by urate and GSH. SOD and ascorbate afforded partial protection, while bilirubin, EUK-8 and alpha-tocopherol were ineffective. 5. Hydroxocobalamin (10⁻⁴ M) inhibited relaxations to NO by 50%, but not the electrically induced responses. Full protection versus this inhibitory effect was obtained with urate, GSH and alpha-tocopherol. 6. These results strengthen the hypothesis that several endogenous antioxidant defense mechanisms, enzymatic as well as non-enzymatic, might play a role in the nitrergic neurotransmission process.

  10. Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs.

    Science.gov (United States)

    Niemeijer, Meindert; van Ginneken, Bram; Cree, Michael J; Mizutani, Atsushi; Quellec, Gwénolé; Sanchez, Clara I; Zhang, Bob; Hornero, Roberto; Lamard, Mathieu; Muramatsu, Chisako; Wu, Xiangqian; Cazuguel, Guy; You, Jane; Mayo, Agustín; Li, Qin; Hatanaka, Yuji; Cochener, Béatrice; Roux, Christian; Karray, Fakhri; Garcia, María; Fujita, Hiroshi; Abramoff, Michael D

    2010-01-01

The detection of microaneurysms in digital color fundus photographs is a critical first step in automated screening for diabetic retinopathy (DR), a common complication of diabetes. To accomplish this detection, numerous methods have been published in the past, but none of these was compared with the others on the same data. In this work we present the results of the first international microaneurysm detection competition, organized in the context of the Retinopathy Online Challenge (ROC), a multiyear online competition for various aspects of DR detection. For this competition, we compare the results of five different methods, produced by five different teams of researchers, on the same set of data. The evaluation was performed in a uniform manner using an algorithm presented in this work. The set of data used for the competition consisted of 50 training images with an available reference standard and 50 test images where the reference standard was withheld by the organizers (M. Niemeijer, B. van Ginneken, and M. D. Abràmoff). The results obtained on the test data were submitted through a website, after which standardized evaluation software was used to determine the performance of each of the methods. A human expert detected microaneurysms in the test set to allow comparison with the performance of the automatic methods. The overall results show that microaneurysm detection is a challenging task for the automatic methods as well as for the human expert. There is room for improvement, as the best performing system does not reach the performance of the human expert. The data associated with the ROC microaneurysm detection competition will remain publicly available and the website will continue accepting submissions.

  11. Detection of the optic disc in fundus images by combining probability models.

    Science.gov (United States)

    Harangi, Balazs; Hajdu, Andras

    2015-10-01

In this paper, we propose a combination method for the automatic detection of the optic disc (OD) in fundus images based on ensembles of individual algorithms. We have studied and adapted some of the state-of-the-art OD detectors and finally organized them into a complex framework in order to maximize the accuracy of the localization of the OD. The detection of the OD can be considered a single-object detection problem. This object can be localized with high accuracy by several algorithms extracting single candidates for the center of the OD, and the final location can be defined using a single majority voting rule. To include more information to support the final decision, we can use member algorithms that provide more candidates, which can be ranked based on the confidence values assigned by the algorithms. In this case, a spatially weighted graph is defined where the candidates are considered its nodes, and the final OD position is determined by finding a maximum-weighted clique. Here, we examine how to exploit, in our ensemble-based framework, all the accessible information supplied by the member algorithms by making them return confidence values for each image pixel. These confidence values inform us about the probability that a given pixel is the center point of the object. We apply axiomatic and Bayesian approaches, as in the aggregation of expert judgments in decision and risk analysis, to combine these confidence values. According to our experimental study, the accuracy of the localization of the OD increases further. Besides single localization, this approach can be adapted for the precise detection of the boundary of the OD. Comparative experimental results are also given for several publicly available datasets.
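The single-candidate voting rule described first can be sketched directly: each member algorithm contributes one candidate center, and the candidate supported by the most other detectors within a tolerance radius wins. This is a simplified stand-in for the paper's weighted-clique formulation, with invented coordinates:

```python
import numpy as np

def vote_center(candidates, radius=10.0):
    """Pick the candidate OD center supported by the most detectors
    within `radius` pixels (a simple majority-vote rule)."""
    pts = np.asarray(candidates, dtype=float)
    # Pairwise distances between all candidate centers.
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    support = (d <= radius).sum(axis=1)   # each candidate supports itself
    return tuple(pts[support.argmax()])

# Three detectors agree near (100, 120); a fourth is an outlier.
cands = [(100, 120), (103, 118), (98, 122), (400, 50)]
print(vote_center(cands))  # (100.0, 120.0)
```

The graph-based version in the paper generalizes this: candidates become graph nodes weighted by detector confidence, and the winning location is read off a maximum-weighted clique instead of a simple support count.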

  12. A longitudinal comparison of spectral-domain optical coherence tomography and fundus autofluorescence in geographic atrophy.

    Science.gov (United States)

    Simader, Christian; Sayegh, Ramzi G; Montuoro, Alessio; Azhary, Malek; Koth, Anna Lucia; Baratsits, Magdalena; Sacu, Stefan; Prünte, Christian; Kreil, David P; Schmidt-Erfurth, Ursula

    2014-09-01

To identify reliable criteria based on spectral-domain optical coherence tomography (SD OCT) to monitor disease progression in geographic atrophy attributable to age-related macular degeneration (AMD), compared with lesion size determination based on fundus autofluorescence (FAF). Prospective longitudinal observational study. Setting: institutional. Study population: a total of 48 eyes in 24 patients with geographic atrophy. Observation procedures: eyes with geographic atrophy were included and examined at baseline and at months 3, 6, 9, and 12. At each study visit best-corrected visual acuity (BCVA), FAF, and SD OCT imaging were performed. FAF images were analyzed using the region overlay device. Planimetric measurements in SD OCT, including alterations or loss of outer retinal layers and the RPE, as well as choroidal signal enhancement, were performed with the OCT Toolkit. Main outcome measures: areas of interest in patients with geographic atrophy measured from baseline to month 12 by SD OCT, compared with the area of atrophy measured by FAF. Geographic atrophy lesion size increased from 8.88 mm² to 11.22 mm² based on quantitative FAF evaluation. Linear regression analysis demonstrated that results similar to FAF planimetry for determining lesion progression can be obtained by measuring the areas of outer plexiform layer thinning (adjusted R² = 0.93), external limiting membrane loss (adjusted R² = 0.89), or choroidal signal enhancement (R² = 0.93) by SD OCT. SD OCT allows morphologic markers of disease progression to be identified in geographic atrophy and may improve understanding of the pathophysiology of atrophic AMD. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Supervised pixel classification for segmenting geographic atrophy in fundus autofluorescence images

    Science.gov (United States)

    Hu, Zhihong; Medioni, Gerard G.; Hernandez, Matthias; Sadda, SriniVas R.

    2014-03-01

Age-related macular degeneration (AMD) is the leading cause of blindness in people over the age of 65. Geographic atrophy (GA) is a manifestation of the advanced or late stage of AMD, which may result in severe vision loss and blindness. Techniques to rapidly and precisely detect and quantify GA lesions would appear to be of important value in advancing the understanding of the pathogenesis of GA and the management of GA progression. The purpose of this study is to develop an automated supervised pixel classification approach for segmenting GA, including uni-focal and multi-focal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity (mean and variance) measures, gray level co-occurrence matrix measures (angular second moment, entropy, and inverse difference moment), and Gaussian filter banks. A k-nearest-neighbor (k-NN) pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. A voting binary iterative hole filling filter is then applied to fill in the small holes. Sixteen randomly chosen FAF images were obtained from sixteen subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by certified graders. Two-fold cross-validation is applied for the evaluation of the classification performance. The mean Dice similarity coefficients (DSC) between the algorithm- and manually-defined GA regions are 0.84 +/- 0.06 for one test and 0.83 +/- 0.07 for the other, and the area correlations between them are 0.99 (p < 0.05) and 0.94 (p < 0.05), respectively.
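The reported agreement metric, the Dice similarity coefficient, is twice the overlap of the two binary masks divided by their total area. A minimal sketch with synthetic algorithm and grader masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 16-pixel squares whose overlap is a 3x3 region (9 pixels).
alg = np.zeros((8, 8), bool)
alg[2:6, 2:6] = True        # "algorithm-defined" GA region
grader = np.zeros((8, 8), bool)
grader[3:7, 3:7] = True     # "manually-defined" GA region
print(dice(alg, grader))    # 0.5625  (= 2*9 / (16+16))
```

A DSC of 1.0 means perfect overlap, so the study's 0.83-0.84 values indicate close but not pixel-perfect agreement with the certified graders.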

  14. Radiometric calibration for MWIR cameras

    Science.gov (United States)

    Yang, Hyunjin; Chun, Joohwan; Seo, Doo Chun; Yang, Jiyeon

    2012-06-01

Korean Multi-purpose Satellite-3A (KOMPSAT-3A), which weighs about 1,000 kg, is scheduled to be launched in 2013 and will be located at a sun-synchronous orbit (SSO) of 530 km in altitude. This is Korea's first satellite to orbit with a mid-wave infrared (MWIR) image sensor, which is currently being developed at Korea Aerospace Research Institute (KARI). The missions envisioned include forest fire surveillance, measurement of the ocean surface temperature, national defense and crop harvest estimates. In this paper, we explain the MWIR scene generation software and atmospheric compensation techniques for the infrared (IR) camera that we are currently developing. The MWIR scene generation software we have developed takes into account sky thermal emission, path emission, target emission, sky solar scattering and ground reflection, based on MODTRAN data. This software will be used for generating the radiation image in the satellite camera, which requires an atmospheric compensation algorithm and validation of the accuracy of the resulting temperature. The image visibility restoration algorithm is a method for removing the effect of the atmosphere between the camera and an object. This algorithm works between the satellite and the Earth, to predict object temperature noised with the Earth's atmosphere and solar radiation. Commonly, to compensate for the atmospheric effect, software such as MODTRAN is used for modeling the atmosphere. Our algorithm does not require additional software to obtain the surface temperature. However, it needs to adjust visibility restoration parameters, and the precision of the result still should be studied.

  15. Cryogenic mechanism for ISO camera

    Science.gov (United States)

    Luciano, G.

    1987-12-01

    The Infrared Space Observatory (ISO) camera configuration, architecture, materials, tribology, motorization, and development status are outlined. The operating temperature is 2 to 3 K, at 2.5 to 18 microns. Selected material is a titanium alloy, with MoS2/TiC lubrication. A stepping motor drives the ball-bearing mounted wheels to which the optical elements are fixed. Model test results are satisfactory, and also confirm the validity of the test facilities, particularly for vibration tests at 4K.

  16. The Flutter Shutter Camera Simulator

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2012-10-01

    Full Text Available The proposed method simulates an embedded flutter shutter camera implemented either analogically or numerically, and computes its performance. The goal of the flutter shutter is to make motion blur invertible, by a "fluttering" shutter that opens and closes on a well chosen sequence of time intervals. In the simulations the motion is assumed uniform, and the user can choose its velocity. Several types of flutter shutter codes are tested and evaluated: the original ones considered by the inventors, the classic motion blur, and finally several analog or numerical optimal codes proposed recently. In all cases the exact SNR of the deconvolved result is also computed.
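The invertibility argument can be checked in the frequency domain: uniform motion blur is convolution with the shutter sequence, and deconvolution fails wherever that sequence's DFT vanishes. A 4-sample toy comparison (not one of the published optimal codes):

```python
import numpy as np

# A continuously open shutter is a box filter; its DFT has exact zeros,
# so those spatial frequencies are destroyed and deconvolution fails.
box = np.array([1.0, 1.0, 1.0, 1.0])
# A fluttered shutter (an open/closed code) keeps every frequency alive,
# so the motion blur it produces remains invertible.
flutter = np.array([1.0, 1.0, 0.0, 1.0])

box_spectrum = np.abs(np.fft.rfft(box))          # [4., 0., 0.]
flutter_spectrum = np.abs(np.fft.rfft(flutter))  # [3., 1., 1.]
print(box_spectrum.min(), flutter_spectrum.min())
```

The optimal codes evaluated by the simulator are chosen to maximize exactly this minimum spectral magnitude (and hence the SNR of the deconvolved result) over much longer sequences.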

  17. Computational cameras: convergence of optics and processing.

    Science.gov (United States)

    Zhou, Changyin; Nayar, Shree K

    2011-12-01

    A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.

  18. Light field panorama by a plenoptic camera

    Science.gov (United States)

    Xue, Zhou; Baboulaz, Loic; Prandoni, Paolo; Vetterli, Martin

    2013-03-01

The consumer-grade plenoptic camera Lytro draws a lot of interest from both the academic and the industrial world. However, its low resolution in both the spatial and angular domains prevents it from being used for fine and detailed light field acquisition. This paper proposes to use a plenoptic camera as an image scanner and perform light field stitching to increase the size of the acquired light field data. We consider a simplified plenoptic camera model comprising a pinhole camera moving behind a thin lens. Based on this model, we describe how to perform light field acquisition and stitching under two different scenarios: by camera translation, or by camera translation and rotation. In both cases, we assume the camera motion to be known. In the case of camera translation, we show how the acquired light fields should be resampled to increase the spatial range and ultimately obtain a wider field of view. In the case of camera translation and rotation, the camera motion is calculated such that the light fields can be directly stitched and extended in the angular domain. Simulation results verify our approach and demonstrate the potential of the motion model for further light field applications such as registration and super-resolution.

  19. A Unifying Theory for Camera Calibration.

    Science.gov (United States)

    Ramalingam, SriKumar; Sturm, Peter

    2016-07-19

This paper proposes a unified theory for calibrating a wide variety of camera models such as pinhole, fisheye, catadioptric, and multi-camera networks. We model any camera as a set of image pixels and their associated camera rays in space. Every pixel measures the light traveling along a (half-) ray in 3-space associated with that pixel. By this definition, calibration simply refers to the computation of the mapping between pixels and the associated 3D rays. Such a mapping can be computed using images of calibration grids, which are objects with known 3D geometry, taken from unknown positions. This general camera model allows us to represent non-central cameras; we also consider two special subclasses, namely central and axial cameras. In a central camera, all rays intersect in a single point, whereas the rays are completely arbitrary in a non-central one. Axial cameras are an intermediate case: the camera rays intersect a single line. In this work, we show the theory for calibrating central, axial and non-central models using calibration grids, which can be either three-dimensional or planar.
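Under this model, any camera reduces to a lookup table from pixels to rays, and a central (pinhole) camera is the special case in which every ray origin coincides. A small sketch enumerating that table for a pinhole camera with assumed intrinsics K:

```python
import numpy as np

def pinhole_as_ray_table(K, width, height):
    """Enumerate the pixel -> (ray origin, ray direction) mapping of a
    central (pinhole) camera: all rays share the projection center."""
    Kinv = np.linalg.inv(K)
    table = {}
    for v in range(height):
        for u in range(width):
            d = Kinv @ np.array([u, v, 1.0])   # back-project the pixel
            table[(u, v)] = (np.zeros(3), d / np.linalg.norm(d))
    return table

# Illustrative intrinsics: focal length 100, principal point (2, 2).
K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0, 0.0, 1.0]])
rays = pinhole_as_ray_table(K, 4, 4)
# The pixel at the principal point looks straight down the optical axis:
print(rays[(2, 2)][1])  # [0. 0. 1.]
```

An axial or non-central camera would simply store different origins per pixel in the same table; calibration then amounts to filling the table from grid images instead of deriving it from K.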

  20. The Zwicky Transient Facility Camera

    Science.gov (United States)

    Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.

    2016-08-01

The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.

  1. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  2. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  3. The Effect of Puerperal Exercise on the Decrease in Uterine Fundal Height in Postpartum Mothers at RSUP DR. M. Djamil Padang

    Directory of Open Access Journals (Sweden)

    Nurniati Tianastia Rullynil

    2014-09-01

    Full Text Available Abstract: Hemorrhage is the major cause of maternal morbidity and mortality during the puerperium; 50%-60% of cases are due to failure of the myometrium to contract completely. One intervention to maximise uterine contraction during the puerperium is puerperal exercise (senam nifas), performed to accelerate the process of uterine involution. The aim of this study was to determine the effect of puerperal exercise on the decrease in uterine fundal height (TFU) in postpartum mothers. This was an experimental study with a post-test-only control group design, using a pelvimetry caliper as the measuring instrument. The intervention group performed puerperal exercise and the control group did not; uterine fundal height was then measured on days 1, 3 and 6. Data were analysed using the General Linear Model (GLM). Mean TFU on day 1 was 12.37±0.72 in the intervention group and 12.42±0.54 in the control group; on day 3 it was 9.00±0.94 versus 9.87±0.75, and on day 6 it was 5.72±0.88 versus 7.37±0.68. The decrease in uterine fundal height differed significantly between the two groups on day 3 (p=0.00) and day 6 (p=0.00). It can be concluded that puerperal exercise affects the decrease in uterine fundal height, which fell further in the intervention group than in the control group. Keywords: puerperal exercise, uterine fundal height, post partum

  4. Nursing experience of fundus fluorescein angiography

    Institute of Scientific and Technical Information of China (English)

    王宝霞

    2014-01-01

    Objective: To explore the nursing methods for fundus fluorescein angiography and their significance. Methods: 178 patients undergoing fundus fluorescein angiography were selected. Thorough preparation before angiography, cooperation during the procedure, and appropriate nursing measures afterwards created favorable conditions for normal performance of the examination. Results: With correct nursing cooperation, all patients in this group completed fundus fluorescein angiography smoothly, and no serious adverse reaction occurred. Conclusion: Comprehensive, correct and timely nursing cooperation can prevent and reduce adverse reactions and allows the examination to obtain the best results.

  5. MAGIC-II Camera Slow Control Software

    CERN Document Server

    Steinke, B; Tridon, D Borla

    2009-01-01

    The Imaging Atmospheric Cherenkov Telescope MAGIC I has recently been extended to a stereoscopic system by adding a second 17 m telescope, MAGIC-II. One of the major improvements of the second telescope is an improved camera. The Camera Control Program is embedded in the telescope control software as an independent subsystem; it monitors and controls the camera values and their settings and is written in the visual programming language LabVIEW. The two main parts, the Central Variables File, which stores all information on the pixels and other camera parameters, and the Comm Control Routine, which controls changes in settings, provide reliable operation. A safety routine protects the camera from misuse through accidental commands, from bad weather conditions, and from hardware errors by reacting automatically.

  6. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)

    2001-12-01

    To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  7. Development of biostereometric experiments. [stereometric camera system

    Science.gov (United States)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  8. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  9. Fundus fluorescein angiographic findings in patients who underwent ventricular assist device implantation.

    Science.gov (United States)

    Ozturk, Taylan; Nalcaci, Serhad; Ozturk, Pelin; Engin, Cagatay; Yagdi, Tahir; Akkin, Cezmi; Ozbaran, Mustafa

    2013-09-01

    Disruption of microcirculation in various tissues as a result of deformed blood rheology due to ventricular assist device (VAD) implantation causes novel arteriovenous malformations. Capillary disturbances and related vascular leakage in the retina and choroidea may also be seen in patients supported by VADs. We aimed to evaluate retinal vasculature deteriorations after VAD implantation. The charts of 17 patients who underwent VAD implantation surgery for the treatment of end-stage heart failure were retrospectively reviewed. Eight cases (47.1%) underwent pulsatile pump implantation (Berlin Heart EXCOR, Berlin Heart Mediprodukt GmbH, Berlin, Germany), whereas nine cases (52.9%) had a continuous-flow pump of centrifugal design (HeartWare, HeartWare Inc., Miramar, FL, USA). Study participants were selected among the patients who had survived with a VAD for at least 6 months, and the results of detailed ophthalmologic examinations including optic coherence tomography (OCT) and fundus fluorescein angiography (FA) were documented. All 17 patients were male, with a mean age of 48.5 ± 14.8 years (15-67 years). Detailed ophthalmologic examinations including the evaluation of retinal vascular deteriorations via FA were performed at a mean of 11.8 ± 3.7 months of follow-up (6-18 months). Mean best-corrected visual acuity and intraocular pressure were logMAR 0.02 ± 0.08 and 14.6 ± 1.9 mm Hg, respectively, in the study population. Dilated fundoscopy revealed severe focal arteriolar narrowing in two patients (11.8%) and arteriovenous crossing changes in four patients (23.5%); however, no pathological alteration was present in macular OCT scans. In patients with continuous-flow blood pumps, mean arm-retina circulation time (ARCT) and arteriovenous transit time (AVTT) were 16.8 ± 3.0 and 12.4 ± 6.2 s, respectively, whereas in patients with pulsatile-flow blood pumps they were 17.4 ± 3.6 and 14.0 ± 2.1 s (P=0.526 and P=0

  10. Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs.

    Science.gov (United States)

    Niemeijer, Meindert; Xu, Xiayu; Dumitrescu, Alina V; Gupta, Priya; van Ginneken, Bram; Folk, James C; Abramoff, Michael D

    2011-11-01

    A decreased ratio of the width of retinal arteries to veins [arteriolar-to-venular diameter ratio (AVR)] is well established as predictive of cerebral atrophy, stroke and other cardiovascular events in adults. Tortuous and dilated arteries and veins, as well as decreased AVR, are also markers for plus disease in retinopathy of prematurity. This work presents an automated method to estimate the AVR in retinal color images by detecting the location of the optic disc, determining an appropriate region of interest (ROI), classifying vessels as arteries or veins, estimating vessel widths, and calculating the AVR. After vessel segmentation and vessel width determination, the optic disc is located and the system eliminates all vessels outside the AVR measurement ROI. A skeletonization operation is applied to the remaining vessels, after which vessel crossings and bifurcation points are removed, leaving a set of vessel segments consisting of only vessel centerline pixels. Features are extracted from each centerline pixel in order to assign these a soft label indicating the likelihood that the pixel is part of a vein. As all centerline pixels in a connected vessel segment should be of the same type, the median soft label is assigned to each centerline pixel in the segment. Next, artery-vein pairs are matched using an iterative algorithm, and the widths of the vessels are used to calculate the AVR. We trained and tested the algorithm on a set of 65 high resolution digital color fundus photographs using a reference standard that indicates for each major vessel in the image whether it is an artery or vein. We compared the AVR values produced by our system with those determined by a semi-automated reference system. We obtained a mean unsigned error of 0.06 (SD 0.04) in 40 images with a mean AVR of 0.67. A second observer using the semi-automated system obtained the same mean unsigned error of 0.06 (SD 0.05) on the set of images with a mean AVR of 0.66. The testing data and
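The final ratio computation can be sketched compactly. The function below is a hypothetical simplification, assuming artery/vein widths have already been matched into pairs inside the ROI; the published system derives widths from centerline profiles and pairs vessels with an iterative algorithm:

```python
from statistics import median

def compute_avr(pairs):
    """Estimate the arteriolar-to-venular ratio (AVR) from matched
    (artery_width, vein_width) pairs measured inside the ROI.
    Takes the median of the per-pair width ratios."""
    if not pairs:
        raise ValueError("need at least one artery/vein pair")
    return median(a / v for a, v in pairs)

# Hypothetical widths in pixels; arteries narrower than veins gives AVR < 1
pairs = [(6.0, 9.0), (5.6, 8.0), (7.0, 10.0)]
print(compute_avr(pairs))  # → 0.7
```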

  11. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and concluded that consumer grade digital cameras can be expected to become a useful photogrammetric device for various close range application fields. Meanwhile, mobile phone cameras with 10 mega pixels have appeared on the Japanese market. In these circumstances, the question arises whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is presented in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.
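Lens distortion, the first factor evaluated above, is usually summarised during calibration by a radial polynomial. The sketch below is a generic one-coefficient illustration (not the authors' calibration code) of how a single coefficient k1 displaces a point in normalised image coordinates:

```python
def apply_radial_distortion(x, y, k1):
    """One-parameter radial distortion model: x' = x (1 + k1 r^2),
    where r^2 = x^2 + y^2 in normalised image coordinates.
    Calibration estimates k1 (k1 < 0 gives barrel distortion)."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2
    return x * factor, y * factor

# Barrel distortion pulls an off-centre point toward the image centre
xd, yd = apply_radial_distortion(0.5, 0.0, -0.2)
print(xd < 0.5)  # → True
```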

  12. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    Full Text Available This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.

  13. Intelligent thermal imaging camera with network interface

    Science.gov (United States)

    Sielewicz, Krzysztof M.; Kasprowicz, Grzegorz; Poźniak, Krzysztof T.; Romaniuk, R. S.

    2011-10-01

    In recent years, a significant increase in the usage of thermal imaging cameras can be observed in both the public and commercial sectors, due to the lower cost and expanding availability of uncooled microbolometer infrared radiation detectors. Devices present on the market vary in their parameters and output interfaces. However, all these thermographic cameras are only a source of an image, which is then analyzed in an external image processing unit; there is no possibility to run the user's dedicated image processing algorithms on the thermal imaging camera itself. This paper presents a concept of realization, architecture and hardware implementation of an "intelligent thermal imaging camera with network interface" utilizing modern technologies, standards and approaches in one single device.

  14. Omnidirectional underwater camera design and calibration.

    Science.gov (United States)

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-03-12

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land or water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.
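The refraction that breaks the pinhole model follows Snell's law at each housing interface. A minimal sketch for a single flat interface (an illustration of the physics, not the paper's ray tracer):

```python
import math

def refract_angle(theta_i, n1, n2):
    """Snell's law, n1 sin(theta_i) = n2 sin(theta_t), for a ray
    crossing a flat interface between media with refractive indices
    n1 and n2; returns the refracted angle in radians, or None when
    total internal reflection occurs."""
    s = n1 / n2 * math.sin(theta_i)
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.asin(s)

# A ray entering water (n = 1.33) from air at 30 degrees bends toward the normal
theta_water = refract_angle(math.radians(30), 1.0, 1.33)
print(round(math.degrees(theta_water), 1))  # → 22.1
```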

  15. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming the effects of occlusions that could result in an object being in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  16. Framework for Evaluating Camera Opinions

    Directory of Open Access Journals (Sweden)

    K.M. Subramanian

    2015-03-01

    Full Text Available Opinion mining plays a most important role in text mining applications such as brand and product positioning, customer relationship management, consumer attitude detection and market research. These applications lead to a new generation of companies and products meant for online market perception, online content monitoring and reputation management. Expansion of the web inspires users to contribute and express opinions via blogs, videos and social networking sites. Such platforms provide valuable information for the analysis of sentiment pertaining to a product or service. This study investigates the performance of various feature extraction methods and classification algorithms for opinion mining. Opinions expressed on the Amazon website for cameras were collected and used for evaluation. Features are extracted from the opinions using Term Document Frequency and Inverse Document Frequency (TDFIDF). Feature transformation is achieved through Principal Component Analysis (PCA) and kernel PCA. Naïve Bayes, K Nearest Neighbor, and Classification and Regression Trees (CART) classification algorithms are used to classify the extracted features.
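The TDFIDF feature-extraction step can be illustrated with a toy from-scratch implementation (the tokenisation is deliberately naive, and a real pipeline would use a library vectorizer):

```python
import math
from collections import Counter

def tfidf(docs):
    """Tiny TF-IDF sketch: term frequency times log inverse document
    frequency. Returns one {term: weight} dict per document."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    out = []
    for doc in tokenized:
        tf = Counter(doc)
        out.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return out

reviews = ["great camera lens", "blurry camera photos", "great zoom"]
vecs = tfidf(reviews)
# 'camera' appears in 2 of 3 reviews, so it is weighted below 'lens'
print(vecs[0]["lens"] > vecs[0]["camera"])  # → True
```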

  17. Gesture recognition on smart cameras

    Science.gov (United States)

    Dziri, Aziz; Chevobbe, Stephane; Darouich, Mehdi

    2013-02-01

    Gesture recognition is a feature of human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several methods of gesture recognition have been developed in recent years. However, most real-time methods are designed to operate on a personal computer with large computing resources and memory. In this paper, we analyze relevant methods found in the literature in order to investigate the ability of a smart camera to execute gesture recognition algorithms. We elaborate two hand gesture recognition pipelines. The first method is based on invariant moments extraction and the second on fingertip detection. The hand detection method used for both pipelines is based on skin color segmentation. The results obtained show that the un-optimized versions of the invariant moments method and the fingertip detection method can reach 10 fps on an embedded processor and use about 200 kB of memory.
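The invariant-moments pipeline builds on central image moments, which do not change when the hand translates across the frame. A toy illustration on a binary mask given as pixel coordinates (an assumption for compactness; not the authors' implementation):

```python
def central_moment(mask, p, q):
    """Central image moment mu_pq of a binary mask, where the mask is a
    list of (x, y) foreground pixel coordinates. Moments are computed
    about the centroid, which makes them translation invariant."""
    n = len(mask)
    cx = sum(x for x, _ in mask) / n
    cy = sum(y for _, y in mask) / n
    return sum((x - cx) ** p * (y - cy) ** q for x, y in mask)

# Translation invariance: shifting the shape leaves central moments unchanged
shape = [(0, 0), (1, 0), (2, 0), (1, 1)]
shifted = [(x + 5, y + 3) for x, y in shape]
print(central_moment(shape, 2, 0) == central_moment(shifted, 2, 0))  # → True
```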

  18. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
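The magnification-error component can be illustrated in 1-D: if one colour channel is magnified by a known factor about the image centre, resampling undoes it. A hypothetical nearest-neighbour sketch, unrelated to the paper's joint demosaicking approach (a real corrector interpolates in 2-D):

```python
def correct_lateral_ca(channel, scale):
    """Undo a simple lateral chromatic aberration model in 1-D: the
    channel was magnified by `scale` about the image centre, so sample
    each output pixel from where it came from. Nearest-neighbour only."""
    n = len(channel)
    c = (n - 1) / 2.0
    out = []
    for i in range(n):
        src = c + (i - c) * scale              # source position of pixel i
        j = min(n - 1, max(0, int(src + 0.5)))  # round and clamp to bounds
        out.append(channel[j])
    return out

# A channel magnified by 1.25 is shrunk back toward the original geometry;
# the centre pixel stays fixed, edge pixels move inward
red = [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(correct_lateral_ca(red, 1.25))
```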

  19. Illumination box and camera system

    Science.gov (United States)

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  20. LROC - Lunar Reconnaissance Orbiter Camera

    Science.gov (United States)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. Of particular note is the variety of impact-induced morphologies preserved in a near pristine state in and around kilometer-scale and larger young Copernican age impact craters that include: abundant evidence of impact melt of a variety of rheological properties, including coherent flows with surface textures and planimetric properties reflecting supersolidus (e.g., liquid melt) emplacement, blocks delicately perched on

  1. HRSC: High resolution stereo camera

    Science.gov (United States)

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  2. MISR FIRSTLOOK radiometric camera-by-camera Cloud Mask V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the FIRSTLOOK Radiometric camera-by-camera Cloud Mask (RCCM) dataset produced using ancillary inputs (RCCT) from the previous time period. It is...

  3. Trajectory association across multiple airborne cameras.

    Science.gov (United States)

    Sheikh, Yaser Ajmal; Shah, Mubarak

    2008-02-01

    A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras, to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a likelihood function for evaluating a hypothesized association between observations in multiple cameras that is geometrically motivated. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. that transitive closure is maintained between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models, and through simulation quantitative performance is also reported.
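For a handful of objects the association step can be illustrated by exhaustive search over permutations. This stand-in minimises a geometric-error cost matrix rather than maximising the paper's likelihood, and it scales only to tiny problems; the paper solves the coherent multi-camera case as a k-dimensional matching:

```python
from itertools import permutations

def best_association(cost):
    """Exhaustive search over one-to-one assignments of objects in
    camera A (rows) to objects in camera B (columns), minimising the
    total cost. Returns the column index matched to each row."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)

# Hypothetical cost[i][j]: geometric error if object i in camera A
# corresponds to object j in camera B
cost = [[0.2, 5.0, 7.0],
        [6.0, 0.1, 4.0],
        [8.0, 3.0, 0.3]]
print(best_association(cost))  # → [0, 1, 2]
```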

  4. Effect of indomethacin on electrical field stimulation-induced contractions of isolated transverse and longitudinal rat gastric fundus strips

    Institute of Scientific and Technical Information of China (English)

    Salimeh Afshin; Mansoor Keshavarz; Mahmood Salami; Fatemeh Mirershadi; Bijan Djahanguiri

    2005-01-01

    AIM: To study the effects of indomethacin on isolated transverse and longitudinal rat gastric fundus strips. METHODS: The strips were suspended in an organ bath containing oxygenated Krebs solution, and contractile responses to electrical field stimulation were recorded on a physiograph in an isotonic manner after administration of cumulative concentrations of indomethacin. The effects of indomethacin on strips pretreated with the KATP channel modulators diazoxide and glybenclamide were studied. RESULTS: Treatment of the transverse strips with indomethacin resulted in a concentration-dependent inhibitory response. In longitudinal strips, biphasic responses were seen, which included a stimulatory response at low concentrations of indomethacin, followed by an inhibitory response at higher concentrations. Diazoxide pre-treatment inhibited the stimulatory response of longitudinal strips. Glybenclamide pre-treatment not only blocked the inhibitory effect of the low concentrations of indomethacin on transverse strips, but also increased the amplitude of contractions. Moreover, the drug decreased the amplitude of contractions in longitudinal strips. CONCLUSION: Responses of the isolated longitudinal and transverse rat gastric fundus strips to indomethacin are not similar, and are influenced by KATP channel modulators.

  5. A gene for late-onset fundus flavimaculatus with macular dystrophy maps to chromosome 1p13

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, S.; Rozet, J.M.; Bonneau, D.; Souied, E.; Camuzat, A.; Munnich, A.; Kaplan, J. [Hopital des Enfants Malades, Paris (France); Dufier, J.L. [Hopital Laeennec, Paris (France); Amalric, P. [Consultation d'Ophtalmologie, Albi (France); Weissenbach, J. [Genethon, Evry (France)

    1995-02-01

    Fundus flavimaculatus with macular dystrophy is an autosomal recessive disease responsible for a progressive loss of visual acuity in adulthood, with pigmentary changes of the macula, perimacular flecks, and atrophy of the retinal pigmentary epithelium. Since this condition shares several clinical features with Stargardt disease, which has been mapped to chromosome 1p21-p13, we tested the disease for linkage to chromosome 1p. We report the mapping of the disease locus to chromosome 1p13-p21, in the genetic interval defined by loci D1S435 and D1S415, in four multiplex families (maximum lod score 4.79 at recombination fraction 0 for probe AFM217xb2 at locus D1S435). Thus, despite differences in the age at onset, clinical course, and severity, fundus flavimaculatus with macular dystrophy and Stargardt disease are probably allelic disorders. This result supports the view that allelic mutations produce a continuum of macular dystrophies, with onset in early childhood to late adulthood. 16 refs., 3 figs., 1 tab.

  6. Endomorphins 1 and 2 reduce relaxant non-adrenergic, non-cholinergic neurotransmission in rat gastric fundus.

    Science.gov (United States)

    Storr, M; Gaffal, E; Schusdziarra, V; Allescher, H-D

    2002-06-14

    It is now well established that opioids modulate cholinergic excitatory neurotransmission in the gastrointestinal tract. The aim of the present study was to characterize a possible effect of endomorphins on nonadrenergic, noncholinergic (NANC) relaxant neurotransmission in the rat gastric fundus in vitro. The drugs used in the experiments were the endogenous mu-opioid receptor (MOR) agonists endomorphin 1 and 2 and the mu-opioid receptor antagonist CTAP (D-Phe-Cys-Tyr-D-Trp-Arg-Thr-Pen-Thr-NH2). CTAP left the basal tonus and the spontaneous activity of the preparation unchanged. Electrical field stimulation (EFS) under NANC conditions at frequencies ranging from 0.5 to 16 Hz caused a frequency-dependent relaxant response on the 5-hydroxytryptamine (5-HT) (10(-7) M) precontracted smooth-muscle strip. Both endomorphin 1 and endomorphin 2 significantly reduced this relaxation in a concentration-dependent manner. Endomorphin 1 proved to be more potent in reducing the relaxant responses. The endomorphin effects were significantly reversed by the MOR antagonist CTAP. CTAP itself did not influence the EFS-induced relaxation. In summary, these data provide evidence that the endogenous MOR agonists endomorphin 1 and 2 can reduce nonadrenergic, noncholinergic neurotransmission in the rat gastric fundus smooth muscle via a pathway involving MORs. The physiological relevance of these findings remains to be established, since the data presented suggest that the endomorphins act as neuromodulators within NANC relaxant neurotransmission.

  7. Detection of Hard Exudates in Colour Fundus Images Using Fuzzy Support Vector Machine-Based Expert System.

    Science.gov (United States)

    Jaya, T; Dheeba, J; Singh, N Albert

    2015-12-01

    Diabetic retinopathy is a major cause of vision loss in diabetic patients. Screening large volumes of data currently calls for decision-making by intelligent computer algorithms. This paper presents an expert decision-making system designed using a fuzzy support vector machine (FSVM) classifier to detect hard exudates in fundus images. The optic discs in the colour fundus images are segmented to avoid false alarms using morphological operations and the circular Hough transform. To discriminate between exudate and non-exudate pixels, colour and texture features are extracted from the images. These features are given as input to the FSVM classifier. The classifier analysed 200 retinal images collected from diabetic retinopathy screening programmes. The tests made on the retinal images show that the proposed detection system has better discriminating power than the conventional support vector machine. With the best combination of FSVM and feature sets, the area under the receiver operating characteristic curve reached 0.9606, which corresponds to a sensitivity of 94.1% with a specificity of 90.0%. The results suggest that detecting hard exudates using FSVM contributes to computer-assisted detection of diabetic retinopathy and can serve as a decision support system for ophthalmologists.
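
    The distinctive ingredient of an FSVM is the fuzzy membership assigned to each training sample, which down-weights outliers in the classifier's objective. A minimal sketch of one common membership scheme (distance to the class centroid) is shown below; the feature vectors and the `delta` smoothing constant are illustrative assumptions, not values from the paper.

```python
import math

def fuzzy_memberships(samples, delta=1e-6):
    """Assign each sample a membership in (0, 1] that decays with its
    distance from the class centroid, so outliers contribute less to
    the (weighted) SVM objective -- the core idea behind FSVM."""
    n = len(samples)
    dim = len(samples[0])
    centroid = [sum(s[d] for s in samples) / n for d in range(dim)]
    dists = [math.dist(s, centroid) for s in samples]
    r = max(dists)  # radius of the class in feature space
    return [1.0 - d / (r + delta) for d in dists]

# Exudate-like feature vectors (e.g. mean intensity, texture energy);
# the last sample is an outlier and receives a low membership.
feats = [(0.9, 0.8), (0.85, 0.82), (0.88, 0.79), (0.2, 0.1)]
m = fuzzy_memberships(feats)
print([round(v, 2) for v in m])
```

    In a full pipeline these memberships would scale each sample's penalty term in the SVM training objective, so mislabelled or noisy pixels pull the decision boundary less.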

  8. Changes of fundus blood flow state of patients with open-angle glaucoma before and after the treatment

    Institute of Scientific and Technical Information of China (English)

    Huai-Jie Huang; Mei-Min Niu; Yi Yang; Ke-Qin Li

    2016-01-01

    Objective: To study and observe changes in the fundus blood flow state of patients with open-angle glaucoma before and after treatment. Methods: A total of 60 patients with open-angle glaucoma treated in our hospital from October 2013 to May 2015 were selected as the observation group, and 60 healthy persons undergoing physical examination during the same period served as the control group. The RI, PI, PSV, EDV and VM levels of the ocular artery, central retinal artery and posterior ciliary artery were compared between the control group and the observation group before treatment and at weeks 2, 4, 8 and 12 after treatment. Results: The RI and PI levels of the ocular artery, central retinal artery and posterior ciliary artery of the observation group before treatment and at weeks 2, 4 and 8 after treatment were higher than those of the control group, while the PSV, EDV and VM levels of these arteries were lower than those of the control group. The results of the observation group at each time point after treatment were better than those before treatment, and the differences were significant (P<0.05). Conclusions: Fundus blood flow in patients with open-angle glaucoma changes markedly before and after treatment, and the blood flow of the various arteries shows continuous improvement.

  9. Clinical Analysis of Pregnancy-induced Hypertension Disease Fundus Lesions

    Institute of Scientific and Technical Information of China (English)

    周秋云

    2015-01-01

    Objective: To analyze the important role of fundus examination in pregnancy-induced hypertensive disease. Methods: Sixty patients with pregnancy-induced hypertension syndrome admitted from July 2014 to July 2015 were selected, and the relationship between their fundus lesions and gestational hypertension was analyzed. Results: Of the 60 patients, 56 (93.33%) had fundus lesions and 4 (6.67%) had normal fundi. The longer the course of pregnancy-induced hypertension syndrome, the higher the probability of retinopathy: of 17 patients with a course > 31 d, 16 (94.12%) had fundus lesions. Conclusion: Fundus examination is easy to perform and can provide a reference for the diagnosis of pregnancy-induced hypertension syndrome.

  10. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many ca

  11. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.

  12. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial relationsh

  13. A BASIC CAMERA UNIT FOR MEDICAL PHOTOGRAPHY.

    Science.gov (United States)

    SMIALOWSKI, A; CURRIE, D J

    1964-08-22

    A camera unit suitable for most medical photographic purposes is described. The unit comprises a single-lens reflex camera, an electronic flash unit and supplementary lenses. Simple instructions for use of this basic unit are presented. The unit is entirely suitable for taking fine-quality photographs of most medical subjects by persons who have had little photographic training.

  14. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
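
    In the prototype described above, integrity and authenticity come from hardware-based Trusted Computing. As a purely illustrative software analogue, the sketch below binds each frame to a capture timestamp with an HMAC; the key, frame data, and function names are hypothetical, and a real system would seal the key inside the TPM rather than hold it in process memory.

```python
import hmac, hashlib, time

# Hypothetical per-device key; the paper's prototype would keep the
# key material inside the camera's trusted hardware instead.
DEVICE_KEY = b"example-camera-key"

def sign_frame(frame_bytes, timestamp):
    """Bind a frame to its capture time with an HMAC, giving integrity
    and authenticity (though not a hardware root of trust)."""
    msg = timestamp.to_bytes(8, "big") + frame_bytes
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes, timestamp, tag):
    msg = timestamp.to_bytes(8, "big") + frame_bytes
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, tag)

frame = b"\x00\x01\x02"  # stand-in for raw frame data
ts = int(time.time())
tag = sign_frame(frame, ts)
print(verify_frame(frame, ts, tag))         # True
print(verify_frame(frame + b"x", ts, tag))  # tampered frame -> False
```

    Including the timestamp in the signed message also covers the paper's image-timestamping requirement: replaying an old frame with a new timestamp invalidates the tag.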

  15. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  16. Solid State Replacement of Rotating Mirror Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed "In-situ Storage Image Sensor" or "ISIS", by Prof. Goji Etoh, has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. Though there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, it is clear the ISIS architecture has the potential to approach their performance.

  17. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovations on the airborne camera. This book is the first ever written on the topic and describes all components of a digital airborne camera ranging from the object to be imaged to the mass memory device.

  18. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

    In this article, we will analyze a range of different types of cameras for its use in measurements. We verify a general model of a charged coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for sever
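
    The camera model named above (gain and offset, additive and multiplicative noise, gamma correction) can be written down directly. The sketch below simulates one pixel's response under that model; all parameter values are illustrative assumptions, not the paper's measurements.

```python
import random

def ccd_response(irradiance, gain=2.0, offset=10.0,
                 mult_noise=0.01, add_noise=0.5, gamma=0.45):
    """Simulate the general CCD model: gain and offset, multiplicative
    and additive noise, then gamma correction. Parameter values here
    are illustrative."""
    # Multiplicative noise scales with the signal; additive noise does not.
    signal = gain * irradiance * (1.0 + random.gauss(0.0, mult_noise))
    signal += offset + random.gauss(0.0, add_noise)
    signal = max(signal, 0.0)        # sensor output cannot go negative
    return signal ** gamma           # gamma-compressed value (arbitrary units)

random.seed(0)
print(round(ccd_response(100.0), 2))
```

    Fitting such a model to measured flat-field and dark-frame data is what allows a camera to be used as a measurement instrument rather than just an imager.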

  19. Depth Estimation Using a Sliding Camera.

    Science.gov (United States)

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works reliably with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously changing camera pose and also substantially reduces computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm.
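
    Once a frame pair along the slide is chosen, depth follows from the standard stereo relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch with illustrative numbers, not values from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation used once a frame pair is selected:
    Z = f * B / d. Larger baselines give finer depth resolution,
    which is why the frame-selection step matters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 800-pixel focal length, slide positions 0.1 m apart, 16 px disparity:
print(depth_from_disparity(800, 0.1, 16))  # 5.0 metres
```

    The adaptive frame selection in the paper effectively picks, per pixel, a baseline B large enough for accuracy but small enough that the match remains reliable.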

  20. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  1. New camera tube improves ultrasonic inspection system

    Science.gov (United States)

    Berger, H.; Collis, W. J.; Jacobs, J. E.

    1968-01-01

    Electron multiplier, incorporated into the camera tube of an ultrasonic imaging system, improves resolution, effectively shields low level circuits, and provides a high level signal input to the television camera. It is effective for inspection of metallic materials for bonds, voids, and homogeneity.

  2. Thermal Cameras in School Laboratory Activities

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal cameras offer real-time visual access to otherwise invisible thermal phenomena, which are conceptually demanding for learners during traditional teaching. We present three studies of students' conduction of laboratory activities that employ thermal cameras to teach challenging thermal concepts in grades 4, 7 and 10-12. Visualization of…

  3. Optimal Camera Placement for Motion Capture Systems.

    Science.gov (United States)

    Rahimian, Pooya; Kearney, Joseph K

    2017-03-01

    Optical motion capture is based on estimating the three-dimensional positions of markers by triangulation from multiple cameras. Successful performance depends on points being visible from at least two cameras and on the accuracy of the triangulation. Triangulation accuracy is strongly related to the positions and orientations of the cameras. Thus, the configuration of the camera network has a critical impact on performance. A poor camera configuration may result in a low quality three-dimensional (3D) estimation and consequently low quality of tracking. This paper introduces and compares two methods for camera placement. The first method is based on a metric that computes target point visibility in the presence of dynamic occlusion from cameras with "good" views. The second method is based on the distribution of views of target points. Efficient algorithms, based on simulated annealing, are introduced for estimating the optimal configuration of cameras for the two metrics and a given distribution of target points. The accuracy and robustness of the algorithms are evaluated through both simulation and empirical measurement. Implementations of the two methods are available for download as tools for the community.
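
    A minimal sketch of the simulated-annealing idea: camera positions are perturbed randomly, and a placement is scored by how many target points at least two cameras can see (the minimum for triangulation). The coverage metric, target layout, sensing range, and annealing schedule below are simplified illustrations, not the paper's metrics.

```python
import math, random

TARGETS = [(2, 2), (2, 8), (8, 2), (8, 8)]  # hypothetical marker positions
RANGE = 7.0                                  # a camera "sees" within this radius

def coverage(cams):
    # Count targets visible from at least two cameras -- the minimum
    # needed for triangulation.
    return sum(
        sum(math.dist(c, t) <= RANGE for c in cams) >= 2
        for t in TARGETS)

def anneal(n_cams=2, steps=4000, t0=2.0, seed=1):
    rng = random.Random(seed)
    cams = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_cams)]
    best, best_cov = list(cams), coverage(cams)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9
        i = rng.randrange(n_cams)
        cand = list(cams)
        cand[i] = (min(10, max(0, cams[i][0] + rng.gauss(0, 1))),
                   min(10, max(0, cams[i][1] + rng.gauss(0, 1))))
        delta = coverage(cand) - coverage(cams)
        # Accept improvements always; accept regressions with a
        # probability that shrinks as the temperature cools.
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            cams = cand
            if coverage(cams) > best_cov:
                best, best_cov = list(cams), coverage(cams)
    return best, best_cov

placement, covered = anneal()
print(covered)  # number of targets seen by at least two cameras
```

    The paper's metrics additionally weight visibility by view quality and dynamic occlusion; plugging such a metric into `coverage` leaves the annealing loop unchanged.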

  4. AIM: Ames Imaging Module Spacecraft Camera

    Science.gov (United States)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low power, low cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M plus spacecraft cameras, it does so on a fraction of the mass, power and cost budget.

  5. Creating and Using a Camera Obscura

    Science.gov (United States)

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…

  6. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  8. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  9. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system for making stereo photographs or videos, based on just two mirrors that split the image field, was created in 1989 and recently adapted to a digital camera setup.

  10. Laser Dazzling of Focal Plane Array Cameras

    NARCIS (Netherlands)

    Schleijpen, H.M.A.; Dimmeler, A.; Eberle, B; Heuvel, J.C. van den; Mieremet, A.L.; Bekman, H.H.P.T.; Mellier, B.

    2007-01-01

    Laser countermeasures against infrared focal plane array cameras aim to saturate the full camera image. In this paper we will discuss the results of dazzling experiments performed with MWIR lasers. In the “low energy” pulse regime we observe an increasing saturated area with increasing power. The si

  11. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  12. Flow visualization by mobile phone cameras

    Science.gov (United States)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed the way people communicate over the last ten years. However, these devices do not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of exploiting this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and in fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and identify bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
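
    The core PIV step is locating the displacement peak of the cross-correlation between interrogation windows of two successive frames. A brute-force, pure-Python sketch of that step (real PIV codes use FFT-based correlation; the frames here are synthetic single-particle images, not data from the article):

```python
def cross_correlate_shift(a, b, max_shift=3):
    """Find the integer (dy, dx) that best aligns window b to window a
    by maximizing the raw cross-correlation -- the core PIV step."""
    rows, cols = len(a), len(a[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(rows):
                for x in range(cols):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < rows and 0 <= x2 < cols:
                        score += a[y][x] * b[y2][x2]
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# A bright particle at (2, 2) in frame 1 moves to (3, 4) in frame 2:
f1 = [[0] * 8 for _ in range(8)]; f1[2][2] = 1
f2 = [[0] * 8 for _ in range(8)]; f2[3][4] = 1
print(cross_correlate_shift(f1, f2))  # (1, 2)
```

    Dividing the pixel shift by the inter-frame time (here 1/240 s at the phone's 240 Hz mode) and the image scale gives the local flow velocity.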

  13. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player's preferences on virtual camera movements and we employ the resulting models to tailor the viewpoint movements to the player type and her game-play style. Ultimately, the methodology is applied to a 3D platform game and is evaluated through a controlled experiment; the results suggest that the resulting adaptive cinematographic experience is favoured by some player types and it can generate

  14. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views are not overlapped. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM) with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.

  15. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can

  16. True three-dimensional camera

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2013-01-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by short photo-conducting lightguides at each pixel. In the eye the rods and cones are the fiber-like lightguides. The device uses ambient light that is only coherent in spherical shell-shaped light packets of thickness of one coherence length. Modern semiconductor technology permits the construction of lightguides shorter than a coherence length of ambient light. Each of the frequency components of the broad band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel. Light frequency components in the packet arriving at a pixel through a convex lens add constructively only if the light comes from the object point in focus at this pixel. The light in packets from all other object points cancels. Thus the pixel receives light from one object point only. The lightguide has contacts along its length. The lightguide charge carriers are generated by the light patterns. These light patterns, and thus the photocurrent, shift in response to the phase of the input signal. Thus, the photocurrent is a function of the distance from the pixel to its object point. Applications include autonomous vehicle navigation and robotic vision. Another application is a crude teleportation system consisting of a camera and a three-dimensional printer at a remote location.

  17. Practical intraoperative stereo camera calibration.

    Science.gov (United States)

    Pratt, Philip; Bergeles, Christos; Darzi, Ara; Yang, Guang-Zhong

    2014-01-01

    Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.

  18. Cloud Computing with Context Cameras

    CERN Document Server

    Pickles, A J

    2013-01-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every 2 minutes through BVriz filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of 0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-comp...

  19. NIR Camera/spectrograph: TEQUILA

    Science.gov (United States)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

    We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024x1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low- and medium-resolution spectroscopy, and polarimetry. The basic system is designed to consist of the following: 1) an LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder; 2) control and readout electronics based on DSP modules linked to a workstation through fiber optics; 3) an optomechanical assembly cooled to -30°C that provides efficient operation of the instrument in its various modes; 4) a control module for the moving parts of the instrument. The optomechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. Final image acquisition and control of the whole instrument are carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  20. Automatic camera tracking for remote manipulators

    Energy Technology Data Exchange (ETDEWEB)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables.
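
    The kinematic core of this approach can be sketched in a few lines: transform the manipulator tip into the camera frame with a homogeneous transform, convert to pan/tilt angles, and command motion only outside the deadband. All numeric values are illustrative, not from the ORNL implementation:

```python
# Sketch: pan/tilt solution from 4x4 homogeneous transforms, plus the
# bang-bang command with a +-2 degree deadband described in the paper.
import math

def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform to a 3-D point."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

def pan_tilt(target_in_cam):
    """Pan about the vertical axis, tilt about the horizontal; z is the
    optical axis, so a centered target gives zero angles."""
    x, y, z = target_in_cam
    pan = math.degrees(math.atan2(x, z))
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))
    return pan, tilt

def command(angle_err_deg, deadband_deg=2.0):
    """Bang-bang command: move only outside the deadband, which avoids
    continuous camera motion (operator 'seasickness')."""
    if abs(angle_err_deg) <= deadband_deg:
        return 0
    return 1 if angle_err_deg > 0 else -1

# Camera frame coincident with the world frame except a 100 mm x-offset.
T_cam_from_world = [[1, 0, 0, -100.0],
                    [0, 1, 0, 0.0],
                    [0, 0, 1, 0.0],
                    [0, 0, 0, 1.0]]
target_world = (100.0, 0.0, 500.0)   # manipulator tip from its sensors
pan, tilt = pan_tilt(mat_vec(T_cam_from_world, target_world))
```

    Here the target lands exactly on the optical axis, so both angle errors fall inside the deadband and no motion is commanded.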

  1. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons on our campus.
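
    The decentralized handover logic can be illustrated with a toy state machine: one tracker instance per object "lives" on a single camera and migrates to a neighbour that sees the object better. Camera names, scores, and the migration threshold are invented; on the real system migration is performed by the mobile-agent framework rather than a method call:

```python
# Toy sketch of decentralized handover: the tracker migrates to whichever
# adjacent camera currently has a clearly better view of the object.

class Tracker:
    def __init__(self, host):
        self.host = host          # camera currently running the tracker
        self.trail = [host]       # migration history

    def step(self, visibility):
        """visibility: {camera_name: detection score in [0, 1]}."""
        best = max(visibility, key=visibility.get)
        # Hand over only when a neighbour's view is clearly better
        # (hysteresis margin avoids ping-ponging between cameras).
        if best != self.host and visibility[best] > visibility.get(self.host, 0.0) + 0.2:
            self.host = best
            self.trail.append(best)
        return self.host

t = Tracker("cam_A")
t.step({"cam_A": 0.9, "cam_B": 0.2})   # object still in A's view
t.step({"cam_A": 0.3, "cam_B": 0.8})   # object walked into B's view
```

    No central node ever sees the scores; each decision uses only information exchanged between the adjacent cameras, which is what makes the scheme scalable.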

  2. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in the widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that address these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.

  3. Camera calibration for multidirectional flame chemiluminescence tomography

    Science.gov (United States)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

    Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. High-accuracy reconstruction requires that the extrinsic parameters (the positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method was presented for FCT, and a 3-D calibration pattern was designed to solve the parameters. The precision of the method was evaluated by reprojecting feature points to the cameras with the calibration results. The maximum root mean square error of the feature points' position is 1.42 pixels, and that of the image distance is 0.0064 mm. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the cameras' calibration results.
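
    The evaluation step, reprojecting known feature points and scoring the residual, can be sketched as follows; the points and camera parameters are synthetic stand-ins for the paper's 3-D calibration pattern:

```python
# Sketch: project known 3-D feature points through a pinhole model and
# compute the RMS reprojection error against observed pixel positions.
import math

def project(pt, f_px, cx, cy):
    """Simple pinhole projection of a camera-frame point (square pixels)."""
    x, y, z = pt
    return (f_px * x / z + cx, f_px * y / z + cy)

def rms_reprojection_error(points3d, observed_px, f_px, cx, cy):
    """Root mean square pixel distance between projected and observed."""
    sq = 0.0
    for pt, (u, v) in zip(points3d, observed_px):
        pu, pv = project(pt, f_px, cx, cy)
        sq += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(sq / len(points3d))

pts = [(0.0, 0.0, 100.0), (10.0, 5.0, 100.0)]
obs = [(320.0, 240.0), (400.0, 280.0)]   # consistent with f=800, cx=320, cy=240
err = rms_reprojection_error(pts, obs, 800.0, 320.0, 240.0)
```

    In a real calibration the same residual, accumulated over all pattern points and cameras, is the quantity the solver minimizes and the paper reports.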

  4. Electronic cameras for low-light microscopy.

    Science.gov (United States)

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. The chapter also presents a basic understanding of how electronic cameras function and how these properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels.
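
    The trade-offs above follow from a simple shot-noise/read-noise budget. A back-of-the-envelope sketch with typical, assumed noise figures (not values from the chapter):

```python
# Shot/read-noise model behind the camera recommendations: slow-scan CCDs
# win through low read noise, while EM-CCDs rescue dim, fast imaging by
# suppressing the effective read noise at the cost of excess noise.
import math

def snr(signal_e, read_noise_e, dark_e=0.0, em_gain=1.0):
    """Per-pixel SNR. EM gain divides the effective read noise but adds an
    excess noise factor of ~2 on the shot-noise variance when active."""
    excess = 2.0 if em_gain > 1.0 else 1.0
    shot_and_dark = excess * (signal_e + dark_e)
    return signal_e / math.sqrt(shot_and_dark + (read_noise_e / em_gain) ** 2)

dim = 20.0  # photoelectrons per pixel per frame (very dim sample)
snr_video    = snr(dim, read_noise_e=50.0)              # basic video CCD
snr_slowscan = snr(dim, read_noise_e=5.0)               # slow-scan CCD
snr_emccd    = snr(dim, read_noise_e=50.0, em_gain=1000.0)
```

    At 20 photoelectrons per pixel the slow-scan and EM-CCD cameras come out far ahead of a basic video CCD, matching the chapter's recommendations.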

  5. Multi-digital Still Cameras with CCD

    Institute of Scientific and Technical Information of China (English)

    LIU Wen-jing; LONG Zai-chuan; XIONG Ping; HUAN Yao-xiong

    2006-01-01

    The digital still camera (DSC) is a typical tool for capturing digital images. With the development of IC technology and optimization algorithms, the performance of digital still cameras will become more and more powerful. But can more and better information be obtained by combining the information from multiple digital still cameras? Experiments show that the answer is yes: by using multiple DSCs at different angles, various kinds of 3-D information about the object can be obtained.

  6. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
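
    The fuzzy reasoning step can be illustrated with a toy one-input controller: triangular membership functions over the target's horizontal offset in the image, a three-rule base, and centroid defuzzification into a pan rate. Membership shapes and rule consequents are invented for illustration, not taken from the paper:

```python
# Toy fuzzy pan controller: fuzzify the target's horizontal offset,
# fire three rules, and defuzzify by a weighted centroid.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan_rate(offset):
    """offset in [-1, 1]: target position relative to image center."""
    mu_left   = tri(offset, -1.5, -1.0, 0.0)
    mu_center = tri(offset, -1.0,  0.0, 1.0)
    mu_right  = tri(offset,  0.0,  1.0, 1.5)
    # Rule consequents (deg/s): left -> -10, centered -> 0, right -> +10.
    num = mu_left * (-10.0) + mu_center * 0.0 + mu_right * 10.0
    den = mu_left + mu_center + mu_right
    return num / den if den else 0.0
```

    Unlike the bang-bang schemes used elsewhere, the fuzzy output varies smoothly with the offset, which is what lets the camera track a moving target without abrupt pan commands.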

  7. Intelligent Camera for Surface Defect Inspection

    Institute of Scientific and Technical Information of China (English)

    CHENG Wan-sheng; ZHAO Jie; WANG Ke-cheng

    2007-01-01

    An intelligent camera for surface defect inspection is presented which can pre-process the surface image of a rolled strip and pick defective areas out at a speed of 1600 meters per minute. The camera is made up of a high-speed line CCD, a 60 Mb/s CCD digitizer with a correlated double sampling function, and a field-programmable gate array (FPGA), which can quickly distinguish defective areas using a perceptron embedded in the FPGA; thus the data to be further processed are dramatically reduced. Experiments show that the camera can keep up with high production speeds and reduce the cost and complexity of automated surface inspection systems.
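
    The FPGA-embedded perceptron idea can be sketched in software: a linear threshold unit over simple per-window features flags candidate defect regions so that only those windows are passed downstream. The features, weights, and tiny training set below are illustrative, not the camera's actual design:

```python
# Sketch: perceptron defect screen over simple per-window features
# (mean intensity, contrast), trained with the classic update rule.

def features(window):
    n = len(window)
    mean = sum(window) / n
    contrast = max(window) - min(window)
    return [mean, contrast, 1.0]          # last term is the bias input

def train_perceptron(samples, labels, epochs=20, lr=0.01):
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            fx = features(x)
            y = 1 if sum(a * b for a, b in zip(w, fx)) > 0 else 0
            for i in range(3):
                w[i] += lr * (t - y) * fx[i]   # perceptron update
    return w

def is_defect(window, w):
    return sum(a * b for a, b in zip(w, features(window))) > 0

# Bright uniform strip = normal; dark streaks / high contrast = defect.
normal  = [[200, 205, 198, 202], [210, 208, 211, 209]]
defects = [[200, 40, 198, 202], [60, 65, 180, 50]]
w = train_perceptron(normal + defects, [0, 0, 1, 1])
```

    Because the decision is a single dot product and threshold, it maps naturally onto FPGA logic running at line-scan rates.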

  8. Close-range photogrammetry with video cameras

    Science.gov (United States)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
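
    The bilinear correction for electronic distortion can be sketched as follows: fit x' = a0 + a1*x + a2*y + a3*x*y (and likewise for y') from control points with known true positions, then map measured coordinates through the model. The four control points are synthetic:

```python
# Sketch: bilinear electronic-distortion correction fitted from four
# control points whose true positions are known.

def solve_bilinear(measured, true_coord):
    """Solve the 4x4 system for one output coordinate's coefficients."""
    rows = [[1.0, x, y, x * y] for x, y in measured]
    m = [row + [t] for row, t in zip(rows, true_coord)]
    n = 4
    for col in range(n):                      # Gauss-Jordan elimination
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Four corners as measured (slightly sheared/scaled) and as they should be.
measured = [(0.0, 0.0), (100.0, 2.0), (1.0, 100.0), (103.0, 104.0)]
true_x = [0.0, 100.0, 0.0, 100.0]
true_y = [0.0, 0.0, 100.0, 100.0]
ax = solve_bilinear(measured, true_x)
ay = solve_bilinear(measured, true_y)

def correct(x, y):
    """Map a measured image coordinate to its corrected position."""
    basis = [1.0, x, y, x * y]
    return (sum(a * b for a, b in zip(ax, basis)),
            sum(a * b for a, b in zip(ay, basis)))
```

    Only after this electronic correction is the residual treated as optical lens distortion and removed, e.g. with the plumb-line method mentioned above.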

  9. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  10. Automated Detection and Differentiation of Drusen, Exudates, and Cotton-Wool Spots in Digital Color Fundus Photographs for Diabetic Retinopathy Diagnosis

    NARCIS (Netherlands)

    Niemeijer, M.; van Ginneken, B.; Russel, S.R.; Suttorp-Schulten, M.S.A.; Abràmoff, M.D.

    2007-01-01

    purpose. To describe and evaluate a machine learning-based, automated system to detect exudates and cotton-wool spots in digital color fundus photographs and differentiate them from drusen, for early diagnosis of diabetic retinopathy. methods. Three hundred retinal images from one eye of 300

  11. Harvesting the weak angular reflections from the fundus of the human eye : on measuring and analyzing the light wasted by the retina

    NARCIS (Netherlands)

    Kraats, J. van der

    2007-01-01

    Summary of the thesis “Harvesting the weak angular reflections from the fundus of the human eye” by Jan van de Kraats, University Medical Centre Utrecht. Defended October 16, 2007. This thesis is on the modeling of the optical reflection of the human fovea, and on the three instruments built for

  12. Towards Adaptive Virtual Camera Control In Computer Games

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platf...

  13. Correlation between peripapillary retinal nerve fiber layer thickness and fundus autofluorescence in primary open-angle glaucoma

    Directory of Open Access Journals (Sweden)

    Reznicek L

    2013-09-01

    Full Text Available Lukas Reznicek,* Florian Seidensticker,* Thomas Mann, Irene Hübert, Alexandra Buerger, Christos Haritoglou, Aljoscha S Neubauer, Anselm Kampik, Christoph Hirneiss, Marcus Kernt Department of Ophthalmology, Ludwig-Maximilians-University, Munich, Germany *These authors contributed equally to this work Purpose: To investigate the relationship between retinal nerve fiber layer (RNFL) thickness and retinal pigment epithelium alterations in patients with advanced glaucomatous visual field defects. Methods: A consecutive, prospective series of 82 study eyes with primary open-angle glaucoma and advanced glaucomatous visual field defects were included in this study. All study participants underwent a full ophthalmic examination followed by visual field testing with standard automated perimetry as well as spectral-domain optical coherence tomography (SD-OCT) for peripapillary RNFL thickness and Optos wide-field fundus autofluorescence (FAF) imaging. A pattern grid with corresponding locations between functional visual field sectors and structural peripapillary RNFL thickness was aligned to the FAF images at the corresponding location. Mean FAF intensity (range: 0 = black to 255 = white) of each evaluated sector (superotemporal, temporal, inferotemporal, inferonasal, nasal, superonasal) was correlated with the corresponding peripapillary RNFL thickness obtained with SD-OCT. Results: Correlation analyses between sectoral RNFL thickness and standardized FAF intensity in the corresponding topographic retina segments revealed partly significant correlations, with correlation coefficients ranging between 0.004 and 0.376; correlations were statistically significant in the temporal inferior central field (r = 0.324, P = 0.036) and the nasal field (r = 0.376, P = 0.014). Conclusion: Retinal pigment epithelium abnormalities correlate with corresponding peripapillary RNFL damage, especially in the temporal inferior sector of patients with advanced glaucomatous visual field defects. A

  14. Change in Drusen Area Over Time Compared Using Spectral-Domain Optical Coherence Tomography and Color Fundus Imaging

    Science.gov (United States)

    Gregori, Giovanni; Yehoshua, Zohar; Garcia Filho, Carlos Alexandre de Amorim; Sadda, SriniVas R.; Portella Nunes, Renata; Feuer, William J.; Rosenfeld, Philip J.

    2014-01-01

    Purpose. To investigate the relationship between drusen areas measured with color fundus images (CFIs) and those with spectral-domain optical coherence tomography (SDOCT). Methods. Forty-two eyes from thirty patients with drusen in the absence of geographic atrophy were recruited to a prospective study. Digital color fundus images and SDOCT images were obtained at baseline and at follow-up visits at 3 and 6 months. Registered, matched circles centered on the fovea with diameters of 3 mm and 5 mm were identified on both CFIs and SDOCT images. Spectral-domain OCT drusen measurements were obtained using a commercially available proprietary algorithm. Drusen boundaries on CFIs were traced manually at the Doheny Eye Institute Image Reading Center. Results. Mean square root drusen area (SQDA) measurements for the 3-mm circles on the SDOCT images were 1.451 mm at baseline and 1.464 mm at week 26, whereas the measurements on CFIs were 1.555 mm at baseline and 1.584 mm at week 26. Mean SQDA measurements from CFIs were larger than those from the SDOCT measurements at all time points (P = 0.004 at baseline, P = 0.003 at 26 weeks). Changes in SQDA over 26 weeks measured with SDOCT were not different from those measured with CFIs (mean difference = 0.014 mm, P = 0.5). Conclusions. Spectral-domain OCT drusen area measurements were smaller than the measurements obtained from CFIs. However, there were no differences in the change in drusen area over time between the two imaging modalities. Spectral-domain OCT measurements were considerably more sensitive in assessing drusen area changes. PMID:25335982

  15. Analysis of fundus shape in highly myopic eyes by using curvature maps constructed from optical coherence tomography.

    Directory of Open Access Journals (Sweden)

    Masahiro Miyake

    Full Text Available PURPOSE: To evaluate fundus shape in highly myopic eyes using color maps created through optical coherence tomography (OCT) image analysis. METHODS: We retrospectively evaluated 182 highly myopic eyes from 113 patients. After obtaining 12 lines of 9-mm radial OCT scans with the fovea at the center, the Bruch's membrane line was plotted and its curvature was measured at 1-µm intervals in each image, which was reflected as a color topography map. For the quantitative analysis of the eye shape, mean absolute curvature and variance of curvature were calculated. RESULTS: The color maps allowed staphyloma visualization as a ring of green color at the edge and orange-red color at the bottom. Analyses of mean and variance of curvature revealed that eyes with myopic choroidal neovascularization tended to have relatively flat posterior poles with smooth surfaces, while eyes with chorioretinal atrophy exhibited a steep, curved shape with an undulated surface (P<0.001). Furthermore, eyes with staphylomas and those without clearly differed in terms of mean curvature and variance of curvature: 98.4% of eyes with staphylomas had mean curvature ≥7.8×10^-5 [1/µm] and variance of curvature ≥0.26×10^-8 [1/µm]. CONCLUSIONS: We established a novel method to analyze posterior pole shape by using OCT images to construct curvature maps. Our quantitative analysis revealed that fundus shape is associated with myopic complications. These values were also effective in distinguishing eyes with staphylomas from those without. This tool for the quantitative evaluation of eye shape should facilitate future research on myopic complications.
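
    The per-point curvature behind such maps can be sketched with central differences: sample the segmented Bruch's membrane line as (x, z) points and evaluate the signed curvature k = (x'z'' - z'x'') / (x'^2 + z'^2)^(3/2). The circle test data below stand in for a real OCT B-scan:

```python
# Sketch: discrete signed curvature along a sampled planar curve,
# verified on a circle (constant curvature 1/R).
import math

def curvature(xs, zs):
    """Signed curvature at interior samples via central differences.
    The step size cancels because curvature is parameterization-invariant."""
    ks = []
    for i in range(1, len(xs) - 1):
        dx  = (xs[i + 1] - xs[i - 1]) / 2.0
        dz  = (zs[i + 1] - zs[i - 1]) / 2.0
        ddx = xs[i + 1] - 2.0 * xs[i] + xs[i - 1]
        ddz = zs[i + 1] - 2.0 * zs[i] + zs[i - 1]
        ks.append((dx * ddz - dz * ddx) / (dx * dx + dz * dz) ** 1.5)
    return ks

# A circle of radius R has constant curvature 1/R everywhere.
R = 1000.0   # microns, roughly the scale of a staphyloma wall
ts = [i * 0.01 for i in range(200)]
ks = curvature([R * math.cos(t) for t in ts], [R * math.sin(t) for t in ts])
```

    Mapping each sample's curvature to a color scale then yields exactly the kind of topography map the study uses to delineate staphylomas.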

  16. Evidence for a modulatory role of orexin A on the nitrergic neurotransmission in the mouse gastric fundus.

    Science.gov (United States)

    Baccari, Maria Caterina; Bani, Daniele; Calamai, Franco

    2009-04-10

    The presence of orexins and their receptors in the gastrointestinal tract supports a local action of these peptides. The aim of the present study was to investigate the effects of orexin A (OXA) on the relaxant responses of the mouse gastric fundus. Mechanical responses of gastric strips were recorded via force-displacement transducers. The presence of orexin receptors (OX-1R) was also evaluated by immunocytochemistry. In carbachol-precontracted strips and in the presence of guanethidine, electrical field stimulation (EFS) elicited a fast inhibitory response that could be followed, at the highest stimulation frequencies employed, by a sustained relaxation. All relaxant responses were abolished by TTX. The fast response was abolished by the nitric oxide (NO) synthesis inhibitor L-NNA (2×10^-4 M) as well as by the guanylate cyclase inhibitor ODQ (1×10^-6 M). OXA (3×10^-7 M) greatly increased the amplitude of the EFS-induced fast relaxation without affecting the sustained one. OXA also potentiated the amplitude of the relaxant responses elicited by the ganglionic stimulating agent DMPP (1×10^-5 M), but had no effect on the direct smooth muscle relaxant responses elicited by papaverine (1×10^-5 M) or VIP (1×10^-7 M). In the presence of L-NNA, the response to DMPP was reduced in amplitude and no longer influenced by OXA. The OX1 receptor antagonist SB-334867 (1×10^-5 M) reduced the amplitude of the EFS-induced fast relaxation without influencing either the sustained responses or those to papaverine and VIP. Immunocytochemistry showed the presence of neurons that co-express neuronal nitric oxide synthase and OX-1R. These results indicate that, in the mouse gastric fundus, OXA exerts a modulatory action at the postganglionic level on the nitrergic neurotransmission.

  17. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    Science.gov (United States)

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma, but detection at its earliest stage and subsequent treatment can help patients prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection, but this method requires manual post-imaging analysis that is time-consuming and subjective, relying on image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer-aided approach for automatic glaucoma detection based on a Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric properties (e.g. morphometric properties) and non-geometric properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions, including automatic localisation of the optic disc, automatic segmentation of the disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4%, and the accuracy of detection of suspected glaucoma in SLO images is 93.9%.

  18. Calibration Procedures on Oblique Camera Setups

    Science.gov (United States)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Beside the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are used not as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibration and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed on a special calibration flight with 351 shots from all 5 cameras and registered GPS/IMU data. This specific mission was flown at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first step with the help of

  19. POLICE BODY CAMERAS: SEEING MAY BE BELIEVING

    Directory of Open Access Journals (Sweden)

    Noel Otu

    2016-11-01

    Full Text Available While the concept of body-mounted cameras (BMC) worn by police officers is a controversial issue, it is not new. Since the early 2000s, police departments across the United States, England, Brazil, and Australia have been implementing wearable cameras. Like all devices used in policing, body-mounted cameras can create a sense of increased power, but also additional responsibilities for both the agencies and individual officers. This paper examines the public debate regarding body-mounted cameras. The conclusions drawn show that while these devices can provide information about incidents relating to police–citizen encounters and can deter citizen and police misbehavior, they can also violate a citizen’s privacy rights. This paper outlines several ramifications for practice as well as implications for policy.

  20. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Beside the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are used not as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibration and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna–IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed on a special calibration flight with 351 shots from all 5 cameras and registered GPS/IMU data. This specific mission was flown at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  1. Compact stereo endoscopic camera using microprism arrays.

    Science.gov (United States)

    Yang, Sung-Pyo; Kim, Jae-Jun; Jang, Kyung-Won; Song, Weon-Kook; Jeong, Ki-Hun

    2016-03-15

    This work reports a microprism array (MPA) based compact stereo endoscopic camera with a single image sensor. The MPAs were monolithically fabricated by using two-step photolithography and geometry-guided resist reflow to form an appropriate prism angle for stereo image pair formation. The fabricated MPAs were transferred onto a glass substrate with a UV curable resin replica by using polydimethylsiloxane (PDMS) replica molding and then successfully integrated in front of a single camera module. The stereo endoscopic camera with MPA splits an image into two stereo images and successfully demonstrates the binocular disparities between the stereo image pairs for objects with different distances. This stereo endoscopic camera can serve as a compact and 3D imaging platform for medical, industrial, or military uses.
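
    The binocular disparity the MPA produces between the two half-images can be turned into distance with the standard stereo relation depth = f * B / d. The focal length and effective baseline below are invented numbers, not values from the paper:

```python
# Sketch: recover object distance from the disparity between the stereo
# image pair formed on the single sensor by the microprism array.

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Distance to the object for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("object at or beyond infinity")
    return focal_px * baseline_mm / disparity_px

f_px, base_mm = 500.0, 4.0     # assumed single-sensor MPA geometry
near = depth_from_disparity(40.0, f_px, base_mm)   # larger disparity, closer
far  = depth_from_disparity(10.0, f_px, base_mm)   # smaller disparity, farther
```

    Note the inverse relationship: with the tiny baseline a compact endoscope permits, usable depth resolution is confined to close working distances, which suits endoscopy.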

  2. Planetary camera control improves microfiche production

    Science.gov (United States)

    Chesterton, W. L.; Lewis, E. B.

    1965-01-01

    Microfiche is prepared using an automatic control system for a planetary camera. The system provides blank end-of-row exposures and signals card completion so the legend of the next card may be photographed.

  3. A Survey of Catadioptric Omnidirectional Camera Calibration

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2013-02-01

    Full Text Available Over the past dozen years, computer vision has become more popular; the omnidirectional camera, with its larger field of view, has been widely used in many fields, such as robot navigation, visual surveillance, virtual reality, three-dimensional reconstruction, and so on. Camera calibration is an essential step to obtain three-dimensional geometric information from a two-dimensional image. Meanwhile, the omnidirectional camera image has catadioptric distortion, which needs to be corrected in many applications; thus the study of such camera calibration methods has important theoretical significance and practical applications. This paper first introduces the research status of catadioptric omnidirectional imaging systems; then the image formation process of the catadioptric omnidirectional imaging system is given; finally a simple classification of omnidirectional imaging methods is given, and the advantages and disadvantages of these methods are discussed.

  4. Increase in the Array Television Camera Sensitivity

    Science.gov (United States)

    Shakhrukhanov, O. S.

    A simple adder circuit for successive television frames, which makes it possible to considerably increase the sensitivity of such radiation detectors, is suggested, using the QN902K array television camera as an example.
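    The effect of the adder circuit can be illustrated in software: averaging N frames of a static scene reduces uncorrelated noise by roughly √N. A sketch with synthetic data (all numbers are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a weak static scene corrupted by read noise in each frame.
    scene = np.full((64, 64), 4.0)
    frames = [scene + rng.normal(0.0, 8.0, scene.shape) for _ in range(64)]

    # Averaging N frames reduces uncorrelated noise by sqrt(N) --
    # the effect the adder circuit achieves in hardware.
    single = frames[0]
    averaged = np.mean(frames, axis=0)

    noise_single = np.std(single - scene)
    noise_avg = np.std(averaged - scene)
    print(noise_single / noise_avg)  # roughly sqrt(64) = 8
    ```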

  5. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable, and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a position-sensitive photomultiplier tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  6. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the efficiency and stability of DSPs in data processing and of the functions of the OpenCV library, this study proposes a scheme for camera calibration in a DSP embedded system. A camera calibration algorithm based on OpenCV is designed by analyzing the camera model and lens distortion. EMCV is ported to the DSP, and the calibration algorithm is migrated and optimized using the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on visual localization based on DSP embedded systems.
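    The calibration the study ports to the DSP estimates, among other things, the radial-tangential lens distortion coefficients of the standard OpenCV camera model. A pure-Python sketch of that distortion model applied to normalized image coordinates (the coefficient values are illustrative):

    ```python
    import numpy as np

    # OpenCV-style radial-tangential distortion model (k1, k2 radial;
    # p1, p2 tangential). Calibration estimates these coefficients by
    # fitting observed chessboard corners; here we just apply the model.

    def distort(xy, k1, k2, p1, p2):
        x, y = xy[..., 0], xy[..., 1]
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return np.stack([xd, yd], axis=-1)

    pts = np.array([[0.1, 0.2], [0.0, 0.0]])
    out = distort(pts, k1=-0.2, k2=0.05, p1=0.001, p2=0.001)
    # The optical center (0, 0) is unaffected by the radial terms.
    ```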

  7. High-performance digital color video camera

    Science.gov (United States)

    Parulski, Kenneth A.; D'Luna, Lionel J.; Benamati, Brian L.; Shelley, Paul R.

    1992-01-01

    Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique and two full-custom CMOS digital video processing integrated circuits, the color filter array (CFA) processor and the RGB postprocessor. The system used a 768 × 484 active element interline transfer CCD with a new field-staggered 3G color filter pattern and a lenslet overlay, which doubles the sensitivity of the camera. The industrial-quality digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB postprocessor digital integrated circuit includes a color correction matrix, gamma correction, 2D edge enhancement, and circuits to control the black balance, lens aperture, and focus.

  8. Contrail study with ground-based cameras

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2013-08-01

    Full Text Available Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types were used. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails, suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found to be thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear, and turbulence, were partly wider than simulated.

  9. Camera Trajectory fromWide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  10. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  11. Vacuum compatible miniature CCD camera head

    Science.gov (United States)

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  12. Selecting the Right Camera for Your Desktop.

    Science.gov (United States)

    Rhodes, John

    1997-01-01

    Provides an overview of camera options and selection criteria for desktop videoconferencing. Key factors in image quality are discussed, including lighting, resolution, and signal-to-noise ratio; and steps to improve image quality are suggested. (LRW)

  13. Camera vibration measurement using blinking light-emitting diode array.

    Science.gov (United States)

    Nishi, Kazuki; Matsuda, Yuichi

    2017-01-23

    We present a new method for measuring camera vibrations such as camera shake and shutter shock. This method successfully detects the vibration trajectory and transient waveforms from the camera image itself. We employ a time-varying pattern as the camera test chart over the conventional static pattern. This pattern is implemented using a specially developed blinking light-emitting-diode array. We describe the theoretical framework and pattern analysis of the camera image for measuring camera vibrations. Our verification experiments show that our method has a detection accuracy and sensitivity of 0.1 pixels, and is robust against image distortion. Measurement results of camera vibrations in commercial cameras are also demonstrated.

  14. Imaging features of fundus adenomyomatosis of the gallbladder

    Institute of Scientific and Technical Information of China (English)

    余迅; 王均庆; 于向荣; 孙屏

    2014-01-01

    Adenomyomatosis of the gallbladder (GAM) is an acquired, benign proliferative lesion of the gallbladder, characterized by mucosal proliferation with invaginations and diverticula penetrating into the thickened muscular layer (Rokitansky-Aschoff sinuses). GAM occurs in three types: diffuse, segmental, and fundus GAM. There is no specific presentation of GAM, and computed tomography is helpful for the diagnosis of this disease. From July 2010 to May 2013, 16 patients with fundus GAM were admitted to the Second People's Hospital of Wuxi. Rokitansky-Aschoff sinuses and the calotte ("small cap") sign in the thickened muscular layer of the gallbladder fundus are the typical presentation of fundus GAM. Enhanced computed tomography examination is of great importance for the diagnosis of fundus GAM.

  15. A stereoscopic lens for digital cinema cameras

    Science.gov (United States)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  16. CMOS Camera Array With Onboard Memory

    Science.gov (United States)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.

  17. Analyzing storage media of digital camera

    OpenAIRE

    Chow, KP; Tse, KWH; Law, FYW; Ieong, RSC; Kwan, MYK; Tse, H.; Lai, PKY

    2009-01-01

    Digital photography has become popular in recent years. Photographs have become common tools for people to record every tiny part of their daily life. By analyzing the storage media of a digital camera, crime investigators may extract a lot of useful information to reconstruct the events. In this work, we discuss a few approaches to analyzing these kinds of storage media of digital cameras. A hypothetical crime case is used as a case study to demonstrate the concepts. © 2009 IEEE.

  18. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [University of Alaska--Fairbanks; Bailey, J [University of Alaska--Fairbanks

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  19. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in areas with dense vegetation cover, or for nocturnal species. The main reason for using camera traps is that they eliminate the economic, personnel, and time costs of observing different points continuously and simultaneously. Camera traps are motion- and heat-sensitive and, depending on the model, can take photos or video. Crossing points and feeding or mating areas of the focal species are the priority locations for setting camera traps. The population size can be found by combining the images with capture-recapture methods, and the population density is the population size divided by the effective sampling area. The mating and breeding season, habitat choice, group structure, and survival rates of the focal species can also be derived from the images. Camera traps are thus very useful for economically obtaining the necessary data about particularly elusive species in planning and conservation efforts.
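    The capture-recapture estimate mentioned above can be sketched with the classic Lincoln-Petersen estimator (all counts and the sampling area below are hypothetical):

    ```python
    # Capture-recapture (Lincoln-Petersen) estimate from camera-trap
    # photos, as the abstract describes; the numbers are illustrative.

    def lincoln_petersen(n1: int, n2: int, recaptures: int) -> float:
        """Population estimate: n1 individuals photographed in the first
        session, n2 in the second, 'recaptures' identified in both."""
        return n1 * n2 / recaptures

    population = lincoln_petersen(n1=30, n2=20, recaptures=10)  # 60.0
    # Density = population size / effective sampling area (here 12 km^2):
    density = population / 12.0  # 5 animals per km^2
    ```

    In practice a bias-corrected variant (e.g. the Chapman estimator) is often preferred for small samples; the division by effective sampling area is the density step the abstract names.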

  20. Single camera stereo using structure from motion

    Science.gov (United States)

    McBride, Jonah; Snorrason, Magnus; Goodsell, Thomas; Eaton, Ross; Stevens, Mark R.

    2005-05-01

    Mobile robot designers frequently look to computer vision to solve navigation, obstacle avoidance, and object detection problems such as those encountered in parking lot surveillance. Stereo reconstruction is a useful technique in this domain and can be done in two ways. The first requires a fixed stereo camera rig to provide two side-by-side images; the second uses a single camera in motion to provide the images. While stereo rigs can be accurately calibrated in advance, they rely on a fixed baseline distance between the two cameras. The advantage of a single-camera method is the flexibility to change the baseline distance to best match each scenario. This directly increases the robustness of the stereo algorithm and increases the effective range of the system. The challenge comes from accurately rectifying the images into an ideal stereo pair. Structure from motion (SFM) can be used to compute the camera motion between the two images, but its accuracy is limited and small errors can cause rectified images to be misaligned. We present a single-camera stereo system that incorporates a Levenberg-Marquardt minimization of rectification parameters to bring the rectified images into alignment.
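    The Levenberg-Marquardt refinement described above can be illustrated on a toy version of the problem: estimating a single in-plane rotation of the second view by driving the vertical disparity of corresponding points to zero. This is synthetic data and one parameter only; the real system refines a full set of rectification parameters:

    ```python
    import numpy as np

    def rotate(pts, theta):
        c, s = np.cos(theta), np.sin(theta)
        return pts @ np.array([[c, -s], [s, c]]).T

    rng = np.random.default_rng(1)
    p1 = rng.uniform(-1.0, 1.0, (50, 2))   # features in the first image
    p2 = rotate(p1, 0.05)                  # second view, unknown rotation

    theta, lam = 0.0, 1e-3
    for _ in range(20):
        r = rotate(p2, -theta)[:, 1] - p1[:, 1]       # vertical disparity
        eps = 1e-6                                    # numeric Jacobian
        J = (rotate(p2, -(theta + eps))[:, 1]
             - rotate(p2, -theta)[:, 1]) / eps
        theta -= (J @ r) / (J @ J + lam)              # damped GN/LM step
    # theta converges to 0.05: residual vertical disparity goes to zero,
    # i.e. the two views are brought into row alignment.
    ```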

  1. A comparison of colour micrographs obtained with a charge-coupled device (CCD) camera and a 35-mm camera

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Smedegaard, Jesper; Jensen, Peter Koch

    2005-01-01

    ophthalmology, colour CCD camera, colour film, digital imaging, resolution, micrographs, histopathology, light microscopy

  2. Lag Camera: A Moving Multi-Camera Array for Scene-Acquisition

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2007-04-01

    Full Text Available Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as lightfields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects, such as people entering and leaving the scene. The methods previously listed have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.

  3. Camera Calibration Accuracy at Different Uav Flying Heights

    Science.gov (United States)

    Yusoff, A. R.; Ariff, M. F. M.; Idris, K. M.; Majid, Z.; Chong, A. K.

    2017-02-01

    Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation survey, whereby low-cost digital cameras are commonly used in the UAV mapping. Thus, camera calibration is considered important in obtaining high-accuracy UAV mapping using low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and on the field, and the UAV image mapping accuracy assessment used calibration parameters of different camera distances. The camera distances used for the image calibration acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres on the field using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for the camera calibration and for checking the accuracy of the measurement at different camera distances. Bundle adjustment concept was applied in Australis software to perform the camera calibration and accuracy assessment. The results showed that the camera distance at 25 metres is the optimum object distance as this is the best accuracy obtained from the laboratory as well as outdoor mapping. In conclusion, the camera calibration at several camera distances should be applied to acquire better accuracy in mapping and the best camera parameter for the UAV image mapping should be selected for highly accurate mapping measurement.

  4. How to Build Your Own Document Camera for around $100

    Science.gov (United States)

    Van Orden, Stephen

    2010-01-01

    Document cameras can have great utility in second language classrooms. However, entry-level consumer document cameras start at around $350. This article describes how the author built three document cameras and offers suggestions for how teachers can successfully build their own quality document camera using a webcam for around $100.

  5. 16 CFR 1025.45 - In camera materials.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false In camera materials. 1025.45 Section 1025.45... PROCEEDINGS Hearings § 1025.45 In camera materials. (a) Definition. In camera materials are documents... excluded from the public record. (b) In camera treatment of documents and testimony. The Presiding...

  6. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper we propose a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied to the Multiple Cameras Endoscopic Capsule (MCEC). To cover more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and thereby prolong the MCEC's working life, a low-complexity image compressor with a PSNR of 40.7 dB and a compression rate of 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype reaches 98% and its power consumption is only about 7.1 mW.

  7. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in the construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve its accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high-speed acquisition.
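    The phasor approach mentioned above projects each pixel's modulated response onto the first harmonic of the modulation frequency. A minimal single-pixel sketch with synthetic data (the phase and modulation values are illustrative):

    ```python
    import numpy as np

    # Phasor (g, s) computation from a stack of phase-stepped images:
    # each pixel's response is projected onto cos/sin of the modulation.
    n_phases = 16
    phases = 2 * np.pi * np.arange(n_phases) / n_phases

    # Synthetic pixel: DC level plus modulated component of known
    # phase shift and modulation depth (values made up).
    true_phase, true_mod = 0.6, 0.4
    stack = 100.0 * (1 + true_mod * np.cos(phases - true_phase))

    g = 2 * np.mean(stack * np.cos(phases)) / np.mean(stack)
    s = 2 * np.mean(stack * np.sin(phases)) / np.mean(stack)

    modulation = np.hypot(g, s)   # recovers true_mod
    phase = np.arctan2(s, g)      # recovers true_phase; tan(phi) = omega * tau
    ```

    The recovered phase and modulation map to lifetime via tan(φ) = ωτ for a single-exponential decay; the per-pixel calibrations described in the abstract correct g and s before this mapping.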

  8. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
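    The BQP formulation can be illustrated on a toy instance: a unary coverage score per camera plus a pairwise bonus for camera pairs that view the same location from different directions. Brute force stands in for the paper's convex solver, and all scores are made up:

    ```python
    import itertools
    import numpy as np

    # Toy binary quadratic program for camera placement: pick k of n
    # candidate poses maximizing unary coverage plus a pairwise bonus.
    coverage = np.array([3.0, 2.0, 2.0, 1.0])       # per-camera score
    pair_bonus = np.array([[0, 0, 2, 0],            # cameras 0 and 2 view
                           [0, 0, 0, 1],            # the same spot from
                           [2, 0, 0, 0],            # different directions
                           [0, 1, 0, 0]]) / 2.0

    def score(x):
        x = np.asarray(x, dtype=float)
        return coverage @ x + x @ pair_bonus @ x    # quadratic objective

    best = max(itertools.combinations(range(4), 2),
               key=lambda c: score([1 if i in c else 0 for i in range(4)]))
    # best -> (0, 2): the pairwise viewing-direction bonus outweighs
    # simply picking the two highest unary coverage scores.
    ```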

  9. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
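    The short focal length and wide-angle lens mentioned above are why a dedicated model is needed: an equidistant fisheye maps incidence angle θ to radius r = f·θ, whereas a pinhole maps it to r = f·tan θ. A sketch of the mapping between the two (the focal length is illustrative, and real action-camera lenses are usually fitted with a polynomial model):

    ```python
    import numpy as np

    f = 800.0  # focal length in pixels (illustrative)

    def fisheye_radius(theta):
        """Equidistant fisheye projection: r = f * theta."""
        return f * theta

    def undistort_radius(r_fisheye):
        """Map an equidistant-fisheye radius to the pinhole radius."""
        return f * np.tan(r_fisheye / f)

    theta = np.deg2rad(60.0)
    r_fe = fisheye_radius(theta)       # fisheye radius at 60 degrees
    r_pin = undistort_radius(r_fe)     # corresponding pinhole radius
    # The gap between the two grows rapidly toward the frame edge,
    # which is why careful self-calibration matters for these lenses.
    ```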

  10. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution.

  11. Camera Calibration with Radial Variance Component Estimation

    Science.gov (United States)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays a more and more important role in recent times. Besides real digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other low-weight flying platforms. The in-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. With statistical methods, the accuracy of photo measurements as a function of the distance of points from the image center has been analyzed. This test provides a curve of measurement precision as a function of the photo radius. A large number of camera types have been tested with well-distributed point measurements in image space. The results of the tests lead to a general conclusion showing a functional connection between accuracy and radial distance, and to a method for checking and enhancing the geometric capability of cameras with respect to these results.

  12. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  13. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements for evaluating various camera system delays. However, the speed or rapidity metrics of a mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality metrics are collected from standards and papers. Second, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Third, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. This work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics that include both quality and speed parameters.

  14. I'm camera shy; should my practice install video surveillance cameras?

    National Research Council Canada - National Science Library

    2010-01-01

    ... the use of cameras is generally sufficient to meet legal requirements." For veterinary hospitals, concerns usually center on surveillance cameras pointed at team members, Dr. Allen says. There are two issues. First, employees may be upset by this new symbol of mistrust in their warm workplace home. To offset this reaction, it's a good idea to ease into the...

  15. Calibration method for a central catadioptric-perspective camera system.

    Science.gov (United States)

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot effectively determine the extrinsic parameters between the central catadioptric camera and a perspective camera. We present a novel calibration method for a central catadioptric-perspective camera system in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera, facing the calibration pattern. The transformation between the virtual camera and the calibration pattern is computed first, and then the extrinsic parameters between the central catadioptric camera and the calibration pattern are obtained. Three-dimensional reconstruction results of the calibration pattern show high accuracy and validate the feasibility of our method.

  16. Speed cameras : how they work and what effect they have.

    OpenAIRE

    2011-01-01

    Much research has been carried out into the effects of speed cameras, and the research shows consistently positive results. International review studies report that speed cameras produce a reduction of approximately 20% in personal injury crashes on road sections where cameras are used. In the Netherlands, research also indicates positive effects on speed behaviour and road safety. Dutch drivers find speed cameras in fixed pole-mounted positions more acceptable than cameras in hidden police c...

  17. Generating Stereoscopic Television Images With One Camera

    Science.gov (United States)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
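    The delay described above is simply the time the camera needs to traverse the interocular baseline, so that both eye views can be displayed near-simultaneously. An arithmetic illustration (the baseline and speed are hypothetical, not from the brief):

    ```python
    def stereo_delay_seconds(baseline_m, translation_speed_m_s):
        """Delay applied to the first-eye video signal: the time the camera
        needs to translate from the left-eye to the right-eye position."""
        if translation_speed_m_s <= 0:
            raise ValueError("translation speed must be positive")
        return baseline_m / translation_speed_m_s

    # A 65 mm interocular baseline traversed at 0.13 m/s calls for a 0.5 s delay.
    print(stereo_delay_seconds(0.065, 0.13))  # 0.5
    ```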

  18. Results of the prototype camera for FACT

    Energy Technology Data Exchange (ETDEWEB)

    Anderhub, H. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Backes, M. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Biland, A.; Boller, A.; Braun, I. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Bretz, T. [Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Commichau, S.; Commichau, V. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Dorner, D. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); INTEGRAL Science Data Center, CH-1290 Versoix (Switzerland); Gendotti, A.; Grimm, O.; Gunten, H. von; Hildebrand, D.; Horisberger, U. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Koehne, J.-H. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Kraehenbuehl, T., E-mail: thomas.kraehenbuehl@phys.ethz.c [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Kranich, D.; Lorenz, E.; Lustermann, W. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Mannheim, K. [Universitaet Wuerzburg, D-97074 Wuerzburg (Germany)

    2011-05-21

    The maximization of the photon detection efficiency (PDE) is a key issue in the development of cameras for Imaging Atmospheric Cherenkov Telescopes. Geiger-mode Avalanche Photodiodes (G-APD) are a promising candidate to replace the commonly used photomultiplier tubes, offering a larger PDE and simpler handling. The FACT (First G-APD Cherenkov Telescope) project evaluates the feasibility of this change by building a camera based on 1440 G-APDs for an existing small telescope. As a first step towards a full camera, a prototype module using 144 G-APDs was successfully built and tested. The strong temperature dependence of G-APDs is compensated by a feedback system, which keeps the gain of the G-APDs constant to within 0.5%.

  19. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  20. HIGH SPEED KERR CELL FRAMING CAMERA

    Science.gov (United States)

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10^-8 seconds during any such time interval of an occurring event. The invention particularly utilizes an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length, in whole multiples of the first channel's optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  1. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, which is a frequency selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is of great benefit for the manipulation of these delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be greatly affected by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  2. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in current National Space Situational Awareness necessary to safeguard orbital assets and crew, and poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be quantified in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and by using twin cameras we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
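    Ranging from a twin-camera pair follows the standard pinhole stereo relation Z = f·B/d. A minimal sketch with hypothetical rig parameters (not the flight hardware's):

    ```python
    def stereo_range_m(focal_px, baseline_m, disparity_px):
        """Range to a matched debris spot under the pinhole stereo model:
        Z = f * B / d, with focal length in pixels and disparity in pixels."""
        if disparity_px <= 0:
            raise ValueError("matched object must have positive disparity")
        return focal_px * baseline_m / disparity_px

    # Hypothetical rig: 8000 px telephoto focal length, 1 m baseline, 4 px disparity.
    print(stereo_range_m(8000, 1.0, 4.0))  # 2000.0 (metres)
    ```

    The relation also shows the design trade-off: a longer baseline or focal length extends the usable ranging distance for a given disparity resolution.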

  3. Vasomotor assessment by camera-based photoplethysmography

    Directory of Open Access Journals (Sweden)

    Trumpp Alexander

    2016-09-01

    Full Text Available Camera-based photoplethysmography (cbPPG is a novel technique that allows the contactless acquisition of cardio-respiratory signals. Previous works on cbPPG most often focused on heart rate extraction. This contribution is directed at the assessment of vasomotor activity by means of cameras. In an experimental study, we show that vasodilation and vasoconstriction both lead to significant changes in cbPPG signals. Our findings underline the potential of cbPPG to monitor vasomotor functions in real-life applications.
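    The raw cbPPG signal is commonly taken as the spatial mean of the green channel over a skin region, with the pulse found as the dominant spectral peak. A minimal sketch on synthetic frames (a generic cbPPG pipeline, not the authors' exact processing):

    ```python
    import numpy as np

    def pulse_signal(frames):
        """Spatially average the green channel of each RGB frame (raw cbPPG signal)."""
        return np.array([f[..., 1].mean() for f in frames])

    def dominant_freq_hz(signal, fps):
        """Dominant frequency of the detrended signal via an FFT peak."""
        x = signal - signal.mean()
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        return freqs[spectrum.argmax()]

    # Synthetic clip: 10 s at 30 fps, green channel pulsing at 1.2 Hz (72 bpm).
    fps, seconds = 30, 10
    t = np.arange(fps * seconds) / fps
    frames = [np.full((4, 4, 3), 100.0) + np.sin(2 * np.pi * 1.2 * ti) for ti in t]
    rate_hz = dominant_freq_hz(pulse_signal(frames), fps)
    print(round(rate_hz * 60))  # 72 beats per minute
    ```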

  4. Scintillating track image camera-SCITIC

    CERN Document Server

    Sato, Akira; Ieiri, Masaharu; Iwata, Soma; Kadowaki, Tetsuhito; Kurosawa, Maki; Nagae, Tomohumi; Nakai, Kozi

    2004-01-01

    A new type of track detector, the scintillating track image camera (SCITIC), has been developed. Scintillating track images of particles in a scintillator are focused by an optical lens system onto the photocathode of an image intensifier tube (IIT). The image signals are amplified by an IIT cascade and stored by a CCD camera. The performance of the detector has been tested with cosmic-ray muons and with pion and proton beams from the KEK 12-GeV proton synchrotron. Data from the test experiments have shown promising features of SCITIC as a triggerable track detector with a variety of possibilities. 7 Refs.

  5. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

    Full Text Available A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future.

  6. Camera-enabled techniques for organic synthesis

    Science.gov (United States)

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    Summary A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  7. Nitrogen camera: detection of antipersonnel mines

    Science.gov (United States)

    Trower, W. Peter; Saunders, Anna W.; Shvedunov, Vasiliy I.

    1997-01-01

    We describe a nuclear technique, the nitrogen camera, with which we have produced images of elemental nitrogen in concentrations and with surface densities typical of buried plastic anti-personnel mines. We have, under laboratory conditions, obtained images of nitrogen in amounts substantially less than in these small 200 g mines. We report our progress in creating the enabling technology to make the nitrogen camera a field deployable instrument: a mobile 70 MeV electron racetrack microtron and scintillator/semiconductor materials and the detectors based on them.

  8. Analysis of Brown camera distortion model

    Science.gov (United States)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into the image. This results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze orthogonality with regard to radius for its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
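    For reference, the Brown–Conrady forward model with two radial (k1, k2) and two decentering (p1, p2) coefficients, in the same form OpenCV uses for normalized image coordinates, can be written as:

    ```python
    def brown_distort(x, y, k1, k2, p1, p2):
        """Brown-Conrady model: radial terms k1, k2 plus decentering terms p1, p2,
        applied to normalized (x, y) coordinates."""
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return xd, yd

    # With all coefficients zero, points are unchanged.
    print(brown_distort(0.3, 0.4, 0, 0, 0, 0))  # (0.3, 0.4)
    ```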

  9. Virtual camera synthesis for soccer game replays

    Directory of Open Access Journals (Sweden)

    S. Sagas

    2013-07-01

    Full Text Available In this paper, we present a set of tools developed during the creation of a platform that allows the automatic generation of virtual views in a live soccer game production. Observing the scene through a multi-camera system, a 3D approximation of the players is computed and used for the synthesis of virtual views. The system is suitable both for static scenes, to create bullet time effects, and for video applications, where the virtual camera moves as the game plays.

  10. A multidetector scintillation camera with 254 channels

    DEFF Research Database (Denmark)

    Sveinsdottir, E; Larsen, B; Rommer, P

    1977-01-01

    A computer-based scintillation camera has been designed for both dynamic and static radionuclide studies. The detecting head has 254 independent sodium iodide crystals, each with a photomultiplier and amplifier. In dynamic measurements simultaneous events can be recorded, and 1 million total counts per second can be accommodated with less than 0.5% loss in any one channel. This corresponds to a calculated deadtime of 5 nsec. The multidetector camera is being used for 133Xe dynamic studies of regional cerebral blood flow in man and for 99mTc and 197Hg static imaging of the brain.

  11. Digital Camera as Gloss Measurement Device

    Directory of Open Access Journals (Sweden)

    Mihálik A.

    2016-05-01

    Full Text Available Nowadays digital cameras with both high resolution and high dynamic range (HDR) can be considered as parallel multiple sensors producing multiple measurements at once. In this paper we describe a technique for processing the captured HDR data and then fitting them to theoretical surface reflection models in the form of a bidirectional reflectance distribution function (BRDF). Finally, the tabular BRDF can be used to calculate the gloss reflection of the surface. We compare the glossiness captured by the digital camera with gloss measured with an industry device and conclude that the results fit well in our experiments.
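    Fitting captured HDR samples to a reflection model can be sketched, for illustration, with a simple Phong-style lobe: grid-search the exponent and solve the linear coefficients by least squares. The paper's actual BRDF model and fitting procedure may differ:

    ```python
    import numpy as np

    def fit_phong_lobe(cos_half, measured, exponents=range(1, 201)):
        """Fit I = kd + ks * cos_half**n to HDR samples by grid-searching the
        exponent n and solving kd, ks linearly for each candidate."""
        best = None
        for n in exponents:
            A = np.column_stack([np.ones_like(cos_half), cos_half ** n])
            sol, *_ = np.linalg.lstsq(A, measured, rcond=None)
            err = float(np.sum((A @ sol - measured) ** 2))
            if best is None or err < best[0]:
                best = (err, sol[0], sol[1], n)
        return best[1], best[2], best[3]  # kd, ks, n

    # Synthetic glossy sample: diffuse 0.2, specular 0.7, exponent 50.
    cos_half = np.linspace(0.5, 1.0, 40)
    measured = 0.2 + 0.7 * cos_half ** 50
    kd, ks, n = fit_phong_lobe(cos_half, measured)
    print(round(kd, 3), round(ks, 3), n)  # ≈ 0.2, 0.7, 50
    ```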

  12. Satisfacción de los usuarios con el servicio de teleoftalmología con cámara no midriática para el cribado de la retinopatía diabética User satisfaction with teleophthalmology with nonmydriatic camera for diabetic retinopathy screening

    Directory of Open Access Journals (Sweden)

    Mª José García Serrano

    2009-08-01

    Objective: To determine satisfaction with the retinography service among patients with diabetes. Methods: We performed a telephone survey of 64 users from July 2006 to March 2007. The mean age was 65.2 years, 57.8% were men, and 54.7% were from an urban primary care team. The variables analyzed were sex, age, primary care team, retinography/tonometry (normal/pathologic), accessibility, punctuality, hygiene, consultation length, explanations, good hands, and kindness, rated on a scale of bad/average/good/very good/perfect, together with satisfaction with the telephone call informing users of the results of the examination and overall satisfaction, both rated on a scale from 0 to 10. Results: Accessibility, punctuality, hygiene, consultation length, good hands and kindness received scores of >80%. The mean overall satisfaction score was 8.38 (95% confidence interval [95%CI]: 8.03-8.72), while satisfaction with the telephone call was 7.88 (95%CI: 7.4-8.36). The variables associated with scores >8 were consultation length, receiving comprehensible explanations, and the telephone call informing patients of the results of the examination. Logistic regression showed (p<0.05) that the variable with the greatest influence on satisfaction was the telephone call. Conclusions: The retinography service was favorably evaluated. The variable with the greatest influence on high satisfaction was communicating the results by telephone. The service will promote new technologies (SMS, e-mail).

  13. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
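    One plausible shape for such a combined score is to normalize each metric to 0..1 (inverting metrics like shutter lag where smaller is better) and take a weighted mean of the quality and speed sub-scores. The weights and ranges below are illustrative, not the paper's:

    ```python
    def normalize(value, worst, best):
        """Map a raw metric onto 0..1, clamped; also works when best < worst
        (e.g. shutter lag, where a smaller value is better)."""
        score = (value - worst) / (best - worst)
        return min(max(score, 0.0), 1.0)

    def combined_score(quality, speed, quality_weight=0.5):
        """Weighted mean of the mean quality and mean speed sub-scores, 0..100."""
        q = sum(quality) / len(quality)
        s = sum(speed) / len(speed)
        return 100.0 * (quality_weight * q + (1.0 - quality_weight) * s)

    # Example phone: two quality scores already in 0..1, and a 0.8 s shutter
    # lag normalized against a hypothetical 0..2 s window (lower is better).
    quality = [normalize(0.9, 0, 1), normalize(0.7, 0, 1)]
    speed = [normalize(0.8, 2.0, 0.0)]
    print(combined_score(quality, speed))  # ≈ 70.0
    ```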

  14. Distribution of intraretinal exudates in diabetic macular edema during anti-vascular endothelial growth factor therapy observed by spectral domain optical coherence tomography and fundus photography.

    Science.gov (United States)

    Pemp, Berthold; Deák, Gábor; Prager, Sonja; Mitsch, Christoph; Lammer, Jan; Schmidinger, Gerald; Scholda, Christoph; Schmidt-Erfurth, Ursula; Bolz, Matthias

    2014-12-01

    To evaluate changes in the distribution and morphology of intraretinal microexudates and hard exudates (HEs) during intravitreal anti-vascular endothelial growth factor therapy in patients with persistent diabetic macular edema. Twenty-four patients with persistent diabetic macular edema after photocoagulation were investigated in this prospective cohort study. Each eye was assigned to a loading dose of three anti-vascular endothelial growth factor treatments at monthly intervals. Additional single treatments were performed if diabetic macular edema persisted or recurred. Intraretinal exudates were analyzed over 6 months using spectral domain optical coherence tomography (SD-OCT) and fundus photography. Before treatment, microexudates were detected by SD-OCT as hyperreflective foci in 24 eyes, whereas HEs were seen in 22 eyes. During therapy, HE increased significantly in number and size. This was accompanied by accumulation of microexudates in the outer retina. Enlargement of hyperreflective structures in SD-OCT was accompanied by enlargement of HE at corresponding fundus locations. A rapid reduction in diabetic macular edema was seen in all patients, but to varying degrees. Patients with hemoglobin A1c levels exudates at corresponding locations in fundus photography and SD-OCT. Intraretinal aggregates of microexudates detectable as hyperreflective foci by SD-OCT may compose and precede HE before they become clinically visible.

  15. The effects of pH on the affinity of pirenzepine for muscarinic receptors in the guinea-pig ileum and rat fundus strip.

    Science.gov (United States)

    Barlow, R. B.; Chan, M.

    1982-01-01

    1. Dose-ratios obtained with pirenzepine on the guinea-pig ileum at 30 degrees C are indistinguishable from those obtained at 37 degrees C. 2. In 0.1 M NaCl at 37 degrees C the pKa of pirenzepine for the loss of its last ionizable proton is 8.2. The ionization of pirenzepine is therefore markedly affected by changes in pH in the physiological range. 3. In experiments with pirenzepine on guinea-pig ileum and rat fundus made over a range of pH, the dose-ratio increases with the proportion of the protonated form present. As expected, the slope of the graph of dose-ratio against proportion protonated depends on the concentration of antagonist. The changes in pH produce only small effects on dose-ratios obtained with pirenzepine monomethiodide. These effects of pH can account for some of the differences between estimates of the affinity of pirenzepine. 4. The logarithm of the affinity constant of the protonated form of pirenzepine for the receptors in guinea-pig ileum is estimated to be 6.93, compared with 6.94 for the receptors in rat fundus. However, for the non-protonated form the values appear to be below 5 for the ileum compared with about 6.4 for the rat fundus. PMID:6897199
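    The pH dependence follows the Henderson-Hasselbalch relation: for a base with pKa 8.2, the fraction carrying its ionizable proton at a given pH can be computed as a one-liner (an illustration of the relation, not the paper's calculation):

    ```python
    def fraction_protonated(pH, pKa):
        """Henderson-Hasselbalch: fraction of a base in the protonated form."""
        return 1.0 / (1.0 + 10.0 ** (pH - pKa))

    # Pirenzepine (pKa 8.2) is mostly protonated at physiological pH 7.4.
    print(round(fraction_protonated(7.4, 8.2), 2))  # 0.86
    ```

    Small pH shifts in the physiological range thus change the protonated proportion appreciably, which is why the dose-ratios track pH.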

  16. Sensitivity and specificity of monochromatic photography of the ocular fundus in differentiating optic nerve head drusen and optic disc oedema: optic disc drusen and oedema.

    Science.gov (United States)

    Gili, Pablo; Flores-Rodríguez, Patricia; Yangüela, Julio; Orduña-Azcona, Javier; Martín-Ríos, María Dolores

    2013-03-01

    Evaluation of the efficacy of monochromatic photography of the ocular fundus in differentiating optic nerve head drusen (ONHD) and optic disc oedema (ODE). Sixty-six patients with ONHD, 31 patients with ODE and 70 healthy subjects were studied. Colour and monochromatic fundus photography with different filters (green, red and autofluorescence) were performed. The results were analysed blindly by two observers. The sensitivity, specificity and interobserver agreement (k) of each test were assessed. Colour photography offers 65.5% sensitivity and 100% specificity for the diagnosis of ONHD. Monochromatic photography improves sensitivity and specificity and provides similar results: green filter (71.2% sensitivity, 96.7% specificity), red filter (80.3% sensitivity, 96.8% specificity), and autofluorescence technique (87.8% sensitivity, 100% specificity). The interobserver agreement was good with all techniques used: autofluorescence (k = 0.957), green filter (k = 0.897), red filter (k = 0.818) and colour (k = 0.809). Monochromatic fundus photography permits ONHD and ODE to be differentiated, with good sensitivity and very high specificity. The best results were obtained with autofluorescence and red filter study.
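    Figures like these come from a 2x2 confusion table, and interobserver agreement from Cohen's kappa. A small sketch (the true-positive count below is back-calculated from the reported 87.8% sensitivity over 66 ONHD eyes, so treat it as illustrative):

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    def cohen_kappa(a, b):
        """Chance-corrected interobserver agreement between two raters' labels."""
        n = len(a)
        po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
        labels = set(a) | set(b)
        pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
        return (po - pe) / (1 - pe)

    # Autofluorescence: ~58 of 66 drusen eyes detected, no false positives.
    sens, spec = sensitivity_specificity(tp=58, fn=8, tn=70, fp=0)
    print(round(sens, 3), round(spec, 3))  # 0.879 1.0

    # Toy kappa example on three eyes labelled drusen ('D') or oedema ('O').
    print(round(cohen_kappa(['D', 'D', 'O'], ['D', 'O', 'O']), 2))  # 0.4
    ```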

  17. Immunohistochemical localization of the antioxidant enzymes biliverdin reductase and heme oxygenase-2 in human and pig gastric fundus.

    Science.gov (United States)

    Colpaert, Erwin E; Timmermans, Jean Pierre; Lefebvre, Romain A

    2002-04-01

    The intrinsic antioxidant capacities of the bile pigments biliverdin and bilirubin are increasingly recognized since both heme degradation products can exert beneficial cytoprotective effects due to their scavenging of oxygen free radicals and interaction with antioxidant vitamins. Several studies have been published on the localization of the carbon monoxide producing enzyme heme oxygenase-2 (HO-2), which concomitantly generates biliverdin; histochemical data on the distribution of biliverdin reductase (BVR), converting biliverdin to bilirubin, are still very scarce in large mammals including humans. The present study revealed by means of immunohistochemistry the presence of BVR and HO-2 in mucosal epithelial cells and in the endothelium of intramural vessels of both human and porcine gastric fundus. In addition, co-labeling with the specific neural marker protein-gene product 9.5 (PGP 9.5) demonstrated that both BVR and HO-2 were present in all intrinsic nerve cell bodies of both submucous and myenteric plexuses, while double labeling with c-Kit antibody confirmed their presence in intramuscular interstitial cells of Cajal (ICC). Our results substantiate the hypothesis that BVR, through the production of the potent antioxidant bilirubin, might be an essential component of normal physiologic gastrointestinal defense in man and pig.

  18. Retinopathy in severe malaria in Ghanaian children - overlap between fundus changes in cerebral and non-cerebral malaria

    DEFF Research Database (Denmark)

    Essuman, Vera A; Ntim-Amponsah, Christine T; Astrup, Birgitte S

    2010-01-01

    ... Secondly, to determine any association between retinopathy and the occurrence of convulsions in patients with CM. Methods and subjects: A cross-sectional study was done of consecutive patients admitted with severe malaria, who were assessed for retinal signs at the Department of Child Health, Korle-Bu Teaching Hospital, Accra, from July to August 2002. All children had dilated-fundus examination by direct and indirect ophthalmoscopy. RESULTS: Fifty-eight children aged between six months and nine years were recruited. Twenty-six (45%) had CM, 22 with convulsions; 26 (45%) had SA and six (10%) had RD. Any retinopathy was seen in: CM 19 (73%), SA 14 (54%), RD 3 (50.0%), CM with convulsion 15 (68%) and CM without convulsion 4 (100%). Comparison between CM and non-CM groups showed a significant risk relationship between retinal whitening and CM (OR = 11.0, CI = 2.2-56.1, p = 0.001). There was no significant association...

  19. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ machine learning to build predictive models of the virtual camera behaviour. The performance of the models on unseen data reveals accuracies above 70% for all the player behaviour types identified. The characteristics of the generated models, their limits and their use for creating adaptive automatic...

  20. Camera shutter is actuated by electric signal

    Science.gov (United States)

    Neff, J. E.

    1964-01-01

    Rotary solenoid energized by an electric signal opens a camera shutter, and when the solenoid is de-energized a spring closes it. By the use of a microswitch, the shutter may be opened and closed in one continuous, rapid operation when the solenoid is actuated.

  1. Multimodal sensing-based camera applications

    Science.gov (United States)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, J. Olli; Vehviläinen, Markku

    2011-02-01

    The increased sensing and computing capabilities of mobile devices can provide for enhanced mobile user experience. Integrating the data from different sensors offers a way to improve application performance in camera-based applications. A key advantage of using cameras as an input modality is that it enables recognizing the context. Therefore, computer vision has been traditionally utilized in user interfaces to observe and automatically detect the user actions. The imaging applications can also make use of various sensors for improving the interactivity and the robustness of the system. In this context, two applications fusing the sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first solution is a real-time user interface that can be used for browsing large images. The solution enables the display to be controlled by the motion of the user's hand using the built-in sensors as complementary information. The second application is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, providing also instructions during the capture. The experiments show that fusing the sensor data improves camera-based applications especially when the conditions are not optimal for approaches using camera data alone.
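    A common way to fuse a camera-based motion estimate with accelerometer data is a complementary blend, weighting the drift-free vision cue against the faster inertial one. A toy sketch (the weighting and signals are hypothetical, not the Nseries implementation):

    ```python
    def fuse(vision_estimate, accel_estimate, alpha=0.9):
        """Complementary blend of a vision-based motion estimate with an
        accelerometer-derived one; alpha weights the vision cue."""
        if not 0.0 <= alpha <= 1.0:
            raise ValueError("alpha must lie in [0, 1]")
        return alpha * vision_estimate + (1.0 - alpha) * accel_estimate

    # Vision says the hand moved 10 px this frame; accelerometers suggest 12 px.
    print(fuse(10.0, 12.0))  # ≈ 10.2
    ```

    When lighting degrades the vision estimate, lowering alpha shifts trust toward the inertial sensors, which is one reason sensor fusion helps precisely when conditions are not optimal for the camera alone.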

  2. The Legal Implications of Surveillance Cameras

    Science.gov (United States)

    Steketee, Amy M.

    2012-01-01

    The nature of school security has changed dramatically over the last decade. Schools employ various measures, from metal detectors to identification badges to drug testing, to promote the safety and security of staff and students. One of the increasingly prevalent measures is the use of security cameras. In fact, the U.S. Department of Education…

  3. Autofocus method for scanning remote sensing cameras.

    Science.gov (United States)

    Lv, Hengyi; Han, Chengshan; Xue, Xucheng; Hu, Changhong; Yao, Cheng

    2015-07-10

    Autofocus methods are conventionally based on capturing the same scene from a series of focal-plane positions. As a result, it has been difficult to apply this technique to scanning remote sensing cameras, where the scene changes continuously. In order to realize autofocus in scanning remote sensing cameras, a novel autofocus method is investigated in this paper. Instead of introducing additional mechanisms or optics, the overlapped pixels of adjacent CCD sensors on the focal plane are employed. Two images corresponding to the same scene on the ground can thus be captured at different times. Further, one focusing step is performed during the time interval, so that the two images are obtained at different focal-plane positions. Subsequently, the direction of the next focusing step is calculated from the two images. The analysis shows that the method operates without restrictions on the time consumption of the algorithm, and that it carries general focus measures and algorithms over from digital still cameras to scanning remote sensing cameras. The experimental results show that the proposed method is applicable to the entire focus-measure family; the error ratio is, on average, no more than 0.2% and drops to 0% with a reliability improvement, which is lower than that of prevalent approaches (12%). The proposed method is demonstrated to be effective and has potential in other scanning imaging applications.
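    The core decision step, comparing two captures of the same ground scene taken one focusing step apart, can be sketched with a generic sharpness metric. The gradient-energy measure below is a common textbook choice, not necessarily the one the authors used:

    ```python
    def focus_measure(img):
        """Sum of squared horizontal gradients over a 2-D list image:
        larger means sharper (one member of the focus-measure family)."""
        return sum((row[i + 1] - row[i]) ** 2
                   for row in img for i in range(len(row) - 1))

    def next_step_direction(img_before, img_after):
        """Compare sharpness of the overlapped-pixel images captured before
        and after one focusing step; +1 keeps the direction, -1 reverses it."""
        return 1 if focus_measure(img_after) >= focus_measure(img_before) else -1
    ```

    Because any scalar sharpness function can be substituted for `focus_measure`, the scheme applies to the whole focus-measure family, as the abstract claims.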

  4. Lights, Camera, Read! Arizona Reading Program Manual.

    Science.gov (United States)

    Arizona State Dept. of Library, Archives and Public Records, Phoenix.

    This document is the manual for the Arizona Reading Program (ARP) 2003 entitled "Lights, Camera, Read!" This theme spotlights books that were made into movies, and allows readers to appreciate favorite novels and stories that have progressed to the movie screen. The manual consists of eight sections. The Introduction includes welcome letters from…

  5. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam, even a good face

  6. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time consuming and labor intensive. This research seeks to automate the most labor intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation to camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
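    The LED-difference idea above can be sketched in a few lines: subtract the LED-off frame from the LED-on frame, threshold the difference, and take the centroid of each bright blob. The flood-fill blob finder and threshold value here are illustrative choices, not the authors' actual code:

    ```python
    def led_centroids(img_on, img_off, threshold=50):
        """Return the (row, col) centroid of each bright blob in the
        difference of two grayscale images given as 2-D lists. A 4-connected
        flood fill stands in for a real blob detector."""
        h, w = len(img_on), len(img_on[0])
        mask = [[img_on[r][c] - img_off[r][c] > threshold for c in range(w)]
                for r in range(h)]
        seen = [[False] * w for _ in range(h)]
        centroids = []
        for r in range(h):
            for c in range(w):
                if mask[r][c] and not seen[r][c]:
                    stack, blob = [(r, c)], []
                    seen[r][c] = True
                    while stack:  # flood-fill one connected component
                        y, x = stack.pop()
                        blob.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                    and not seen[ny][nx]:
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                    centroids.append((sum(p[0] for p in blob) / len(blob),
                                      sum(p[1] for p in blob) / len(blob)))
        return centroids
    ```

    With four embedded LEDs, the four centroids returned would replace the four manual corner clicks of the original calibration routine.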

  8. Camera! Action! Collaborate with Digital Moviemaking

    Science.gov (United States)

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  9. Metasurface lens: Shrinking the camera size

    Science.gov (United States)

    Sun, Cheng

    2017-01-01

    A miniaturized camera has been developed by integrating a planar metasurface lens doublet with a CMOS image sensor. The metasurface lens doublet corrects the monochromatic aberration and thus delivers nearly diffraction-limited image quality over a wide field of view.

  10. Parametrizable cameras for 3D computational steering

    NARCIS (Netherlands)

    Mulder, J.D.; Wijk, J.J. van

    1997-01-01

    We present a method for the definition of multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object. This concept enables a user to create and configure multiple views on his custom 3D interface in an intuitive graphical manner.

  11. Mapping large environments with an omnivideo camera

    NARCIS (Netherlands)

    Esteban, I.; Booij, O.; Zivkovic, Z.; Krose, B.

    2009-01-01

    We study the problem of mapping a large indoor environment using an omnivideo camera. Local features from omnivideo images and epipolar geometry are used to compute the relative pose between pairs of images. These poses are then used in an Extended Information Filter using a trajectory-based representation.

  12. Digital Camera Control for Faster Inspection

    Science.gov (United States)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  13. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  14. Teaching Camera Calibration by a Constructivist Methodology

    Science.gov (United States)

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  15. Camera Systems Rapidly Scan Large Structures

    Science.gov (United States)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  16. Utility of hard exudates for the screening of macular edema.

    Science.gov (United States)

    Litvin, Taras V; Ozawa, Glen Y; Bresnick, George H; Cuadros, Jorge A; Muller, Matthew S; Elsner, Ann E; Gast, Thomas J

    2014-04-01

    The purpose of this study was to determine whether hard exudates (HEs) within one disc diameter of the foveola are an acceptable criterion for the referral of diabetic patients suspected of clinically significant macular edema (CSME) in a screening setting. One hundred forty-three adults diagnosed as having diabetes mellitus were imaged using a nonmydriatic digital fundus camera at the Alameda County Medical Center in Oakland, CA. Nonstereo fundus images were graded independently for the presence of HEs near the center of the macula by two graders according to the EyePACS grading protocol. The patients also received a dilated fundus examination on a separate visit. Clinically significant macular edema was determined during the dilated fundus examination using the criteria set forth by the Early Treatment Diabetic Retinopathy Study. Subsequently, the sensitivity and specificity of HEs within one disc diameter of the foveola in nonstereo digital images, used as a surrogate for the detection of CSME diagnosed by live fundus examination, were calculated. The mean (±SD) age of the 103 patients included in the analysis was 56 ± 17 years. Clinically significant macular edema was diagnosed in 15.5% of eyes during the dilated examination. For the right eyes, the sensitivity of HEs within one disc diameter of the foveola as a surrogate for detecting CSME was 93.8% for each of the graders; the specificity values were 88.5% and 85.1%. For the left eyes, the sensitivity values were 93.8% and 75% for the two graders, respectively; the specificity was 87.4% for both graders. This study supports the use of HEs within one disc diameter of the center of the macula in nonstereo digital images for CSME detection in a screening setting.
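    The reported figures follow from standard confusion-matrix arithmetic. The helper below is generic, and the example counts in the test are hypothetical numbers chosen only because 15/16 reproduces a 93.8% sensitivity; they are not the study's actual tallies:

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Standard screening-test metrics from confusion-matrix counts."""
        sensitivity = tp / (tp + fn)  # fraction of true CSME eyes flagged
        specificity = tn / (tn + fp)  # fraction of non-CSME eyes passed
        return sensitivity, specificity
    ```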

  17. A novel fully integrated handheld gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Massari, R.; Ucci, A.; Campisi, C. [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy); Scopinaro, F. [University of Rome “La Sapienza”, S. Andrea Hospital, Rome (Italy); Soluri, A., E-mail: alessandro.soluri@ibb.cnr.it [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy)

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine the gamma-ray detector, the display, and the embedded computing system in a single device. Its low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact, and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We have applied this detection device to lymphoscintigraphy (sentinel lymph node mapping), comparing the results with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and a spatial resolution adequate for scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small-organ imaging, but it could easily be integrated into surgical navigation systems.

  18. Measuring rainfall with low-cost cameras

    Science.gov (United States)

    Allamano, Paola; Cavagnero, Paolo; Croci, Alberto; Laio, Francesco

    2016-04-01

    In Allamano et al. (2015), we propose to retrieve quantitative measures of rainfall intensity by relying on the acquisition and analysis of images captured with professional cameras (the SmartRAIN technique in the following). SmartRAIN is based on the fundamentals of camera optics and exploits the intensity changes due to drop passages in a picture. The main steps of the method are: i) drop detection, ii) blur-effect removal, iii) estimation of drop velocities, iv) drop positioning in the control volume, and v) rain-rate estimation. The method has been applied to real rain events with errors of the order of ±20%. This work aims to bridge the gap between the need to acquire images with professional cameras and the possibility of exporting the technique to low-cost webcams. We apply the image processing algorithm to frames registered with low-cost cameras both in the lab (i.e., under controlled rain intensity) and in field conditions. The resulting images have lower resolution and significant distortions with respect to professional-camera pictures, and are acquired with a fixed aperture and a rolling shutter. These hardware limitations noticeably affect the readability of the resulting images and may degrade the quality of the rainfall estimate. We demonstrate that proper knowledge of the image acquisition hardware allows one to fully explain the resulting artefacts and distortions, and that, by correcting these effects before applying the image processing algorithm, quantitative rain intensity measures can be obtained with good accuracy even with low-cost modules.
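    Step (v) of the pipeline reduces to summing spherical drop volumes over the control area and converting to a depth rate. The sketch below assumes steps (i)-(iv) have already produced drop diameters; the names and numbers are illustrative, not the SmartRAIN implementation:

    ```python
    import math

    def rain_rate_mm_per_h(drop_diameters_mm, control_area_m2, interval_s):
        """Rain rate from drops detected in the control volume during one
        interval: total spherical drop volume, spread over the control
        area, converted from mm per interval to mm/h."""
        total_mm3 = sum(math.pi / 6.0 * d ** 3 for d in drop_diameters_mm)
        area_mm2 = control_area_m2 * 1e6       # m^2 -> mm^2
        depth_mm = total_mm3 / area_mm2        # water depth this interval
        return depth_mm * 3600.0 / interval_s  # -> mm/h
    ```

    For instance, a single 2 mm drop over a 10 cm² area in one second corresponds to roughly 15 mm/h, a heavy-rain figure, which illustrates how sensitive the estimate is to drop counting.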

  19. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames from particular cameras, but there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is far higher than photogrammetric purposes require, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to survey outdoor advertising structures and billboards. A new law is being drafted to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded over a short period is a candidate for economical and flexible off-site measurement. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  20. NEW VERSATILE CAMERA CALIBRATION TECHNIQUE BASED ON LINEAR RECTIFICATION

    Institute of Scientific and Technical Information of China (English)

    Pan Feng; Wang Xuanyin

    2004-01-01

    A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To address the large distortion of off-the-shelf cameras, a new distortion rectification technology based on line rectification is proposed. A full-camera-distortion model is introduced, and a linear algorithm is provided to obtain the solution. After rectification, the intrinsic and extrinsic camera parameters are obtained from the relationship between the homography and the absolute conic. This technology requires neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that the method is effective and robust.
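    As a minimal sketch of distortion rectification, the simplest special case of such a distortion model is a single-coefficient radial model, inverted here by fixed-point iteration. This is an illustrative simplification, not the paper's full-camera-distortion model or its linear line-rectification algorithm:

    ```python
    def undistort_point(xd, yd, k1, cx=0.0, cy=0.0, iterations=10):
        """Invert the radial model x_d = x_u * (1 + k1 * r_u^2), with
        distortion center (cx, cy), by fixed-point iteration: repeatedly
        divide the distorted offset by the current distortion factor."""
        x0, y0 = xd - cx, yd - cy  # offsets from the distortion center
        xu, yu = x0, y0            # initial guess: undistorted = distorted
        for _ in range(iterations):
            r2 = xu * xu + yu * yu
            xu, yu = x0 / (1 + k1 * r2), y0 / (1 + k1 * r2)
        return xu + cx, yu + cy
    ```

    Applying this to every pixel straightens lines that the lens bent, which is exactly the property line-rectification methods exploit to estimate the distortion coefficients in the first place.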

  1. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    ROVs (Remotely Operated Vehicles) are used for the installation and maintenance of underwater exploration systems in the oil industry. Because these systems operate in remote areas, cameras are essential for visualizing the work area. Synchronizing the motion of the manipulator with the motion of the camera is a complex task for the operator. To achieve this synchronization, this work analyzes the interconnection of the two systems. The systems are coupled by interconnecting the electrical signals of the proportional valves driving the manipulator's actuators with those driving the camera's actuators. With this interconnection, the camera approximately tracks the motion of the manipulator, keeping the object of interest within the operator's field of view. (author)

  2. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  3. Characteristics of fundus autofluorescence and fundus fluorescein angiography in syphilitic posterior uveitis

    Institute of Scientific and Technical Information of China (English)

    龙永华; 王卫峻; 宫媛媛; 孙晓东

    2013-01-01

    Background Fundus autofluorescence (FAF) reflects the functional status of retinal pigment epithelium (RPE) cells. As a noninvasive examination, it is widely used in the diagnosis of retinal diseases, but its application in syphilitic posterior uveitis has not previously been reported. Objective To observe and compare the FAF, fundus fluorescein angiography (FFA), and indocyanine green angiography (ICGA) characteristics of syphilitic posterior uveitis in patients presenting first to an eye clinic. Methods The clinical data of 18 patients (27 eyes) diagnosed with syphilitic posterior uveitis at the Department of Ophthalmology, Shanghai First People's Hospital, Shanghai Jiao Tong University, from May 2010 to October 2012 were retrospectively analyzed. Syphilis was confirmed serologically in all patients. Based on the ocular manifestations, patients were divided into an acute-phase group (disease course within 2 months) and a chronic-phase group (disease course over 2 months). FFA, ICGA, and FAF were performed in all patients, and the FAF findings of each phase were compared with the FFA and ICGA features. Results FFA mainly showed leakage from retinal vessels in the posterior pole and mottled transmitted fluorescence of the retina; some patients showed optic disc staining or fluorescein leakage. Acute-phase patients showed hypofluorescence from macular exudates, while chronic-phase patients could show hyperfluorescence from cystoid edema. ICGA showed diffuse punctate and patchy hypofluorescence in the posterior pole, more evident in the late phase. FAF mainly showed diffusely increased autofluorescence in the posterior pole, more marked in acute-phase patients, with mottled fluorescence and focal punctate areas of decreased FAF; loss of FAF was more evident in chronic-phase patients. Patients with optic disc edema or macular edema showed hypofluorescence in the corresponding areas. Conclusions Syphilitic posterior uveitis presents mainly as retinal vasculitis of the posterior pole. ICGA reveals extensive RPE and choroidal involvement, while FAF suggests impaired RPE metabolism in the acute phase and RPE atrophy or loss in the chronic phase. FAF is an auxiliary diagnostic indicator of morphologic changes in the RPE.

  4. Registration of Sub-Sequence and Multi-Camera Reconstructions for Camera Motion Estimation

    Directory of Open Access Journals (Sweden)

    Michael Wand

    2010-08-01

    Full Text Available This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state-of-the-art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario multiple moving cameras, which capture the same scene simultaneously, are registered. A second new scenario occurs in situations where the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario image sequences of the same scene, which are captured under different illuminations, are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.

  5. The AOTF-based NO2 camera

    Science.gov (United States)

    Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier

    2016-12-01

    The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume dynamic chemistry and its transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. Benefiting from a high retrieval accuracy, they only achieve a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aiming at the measurement of the 2-D distributions of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera as it relies on spectral images taken at wavelengths where the molecule absorption cross section is different. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m² area). The detection limit was close to 5 × 10¹⁶ molecules cm⁻², with a maximum detected SCD of 4 × 10¹⁷ molecules cm⁻². Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO to NO2 conversion in the early plume with an unprecedented resolution: from its release in the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s⁻¹. In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the
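    The two-wavelength principle shared by SO2 and NO2 cameras reduces to a Beer-Lambert ratio: the SCD follows from the log ratio of in-band ("on") and out-of-band ("off") intensities relative to clear-sky references. The function below is a textbook sketch with illustrative values, not the AOTF instrument's actual retrieval:

    ```python
    import math

    def slant_column_density(i_on, i_off, i0_on, i0_off, sigma_on, sigma_off):
        """SCD (molecules/cm^2) from two spectral images via Beer-Lambert:
        tau = ln((I_off/I0_off) / (I_on/I0_on)) is the differential optical
        depth, divided by the differential cross section (cm^2/molecule)."""
        tau = math.log((i_off / i0_off) / (i_on / i0_on))
        return tau / (sigma_on - sigma_off)
    ```

    Evaluating this ratio pixel by pixel is what turns the pair of spectral images into the 2-D SCD maps described above.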

  6. National Guidelines for Digital Camera Systems Certification

    Science.gov (United States)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product including: maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of details (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). 
The study examines all aspects of the final product, including its accuracy and pixel size.

  7. Comparison of Cysts in Red and Green Images for Diabetic Macular Edema.

    Science.gov (United States)

    Alhamami, Mastour A; Elsner, Ann E; Malinovsky, Victor E; Clark, Christopher A; Haggerty, Bryan P; Ozawa, Glen Y; Cuadros, Jorge A; Baskaran, Karthikeyan; Gast, Thomas J; Litvin, Taras V; Muller, Matthew S; Brahm, Shane G; Young, Stuart B; Miura, Masahiro

    2017-02-01

    To investigate whether cysts in diabetic macular edema are better visualized in the red channel of color fundus camera images, as compared with the green channel, because color fundus camera screening methods that emphasize short-wavelength light may miss cysts in patients with dark fundi or changes to outer blood retinal barrier. Fundus images for diabetic retinopathy photoscreening were acquired for a study with Aeon Imaging, EyePACS, University of California Berkeley, and Indiana University. There were 2047 underserved, adult diabetic patients, of whom over 90% self-identified as a racial/ethnic identity other than non-Hispanic white. Color fundus images at nominally 45 degrees were acquired with a Canon Cr-DGi non-mydriatic camera (Tokyo, Japan) then graded by an EyePACS certified grader. From the 148 patients graded to have clinically significant macular edema by the presence of hard exudates in the central 1500 μm of the fovea, we evaluated macular cysts in 13 patients with cystoid macular edema. Age ranged from 33 to 68 years. Color fundus images were split into red, green, and blue channels with custom Matlab software (Mathworks, Natick, MA). The diameter of a cyst or confluent cysts was quantified in the red-channel and green-channel images separately. Cyst identification gave complete agreement between red-channel images and the standard full-color images. This was not the case for green-channel images, which did not expose cysts visible with standard full-color images in five cases, who had dark fundi. Cysts appeared more numerous and covered a larger area in the red channel (733 ± 604 μm) than in the green channel (349 ± 433 μm, P < .006). Cysts may be underdetected with the present fundus camera methods, particularly when short-wavelength light is emphasized or in patients with dark fundi. Longer wavelength techniques may improve the detection of cysts and provide more information concerning the early stages of diabetic macular edema or the outer
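    The channel-splitting step (done in Matlab in the study) and a per-channel extent measurement can be sketched as follows. `visible_extent` is a hypothetical 1-D proxy for the cyst-diameter measurement, not the authors' method:

    ```python
    def split_channels(rgb_image):
        """Split an image given as rows of (R, G, B) tuples into three
        single-channel images, mirroring the study's channel separation."""
        red = [[px[0] for px in row] for row in rgb_image]
        green = [[px[1] for px in row] for row in rgb_image]
        blue = [[px[2] for px in row] for row in rgb_image]
        return red, green, blue

    def visible_extent(channel_row, background, contrast=10):
        """Longest run of pixels differing from the local background by more
        than `contrast` levels: a crude stand-in for how far a cyst remains
        visible in one channel along a line through it."""
        best = run = 0
        for v in channel_row:
            run = run + 1 if abs(v - background) > contrast else 0
            best = max(best, run)
        return best
    ```

    Comparing `visible_extent` of the same row in the red and green channels mimics the study's finding that a cyst can span more pixels in red than in green.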

  8. Comparison of Cysts in Red and Green Images for Diabetic Macular Edema

    Science.gov (United States)

    Alhamami, Mastour A.; Elsner, Ann E.; Malinovsky, Victor E.; Clark, Christopher A.; Haggerty, Bryan P.; Ozawa, Glen Y.; Cuadros, Jorge A.; Baskaran, Karthikeyan; Gast, Thomas J.; Litvin, Taras V.; Muller, Matthew S.; Brahm, Shane G.; Young, Stuart B.; Miura, Masahiro

    2017-01-01

    ABSTRACT Purpose To investigate whether cysts in diabetic macular edema are better visualized in the red channel of color fundus camera images, as compared with the green channel, because color fundus camera screening methods that emphasize short-wavelength light may miss cysts in patients with dark fundi or changes to the outer blood-retinal barrier. Methods Fundus images for diabetic retinopathy photoscreening were acquired for a study with Aeon Imaging, EyePACS, University of California Berkeley, and Indiana University. There were 2047 underserved, adult diabetic patients, of whom over 90% self-identified as a racial/ethnic identity other than non-Hispanic white. Color fundus images at nominally 45 degrees were acquired with a Canon CR-DGi non-mydriatic camera (Canon, Tokyo, Japan) and then graded by an EyePACS-certified grader. Of the 148 patients graded as having clinically significant macular edema, based on the presence of hard exudates in the central 1500 μm of the fovea, we evaluated macular cysts in 13 patients with cystoid macular edema. Ages ranged from 33 to 68 years. Color fundus images were split into red, green, and blue channels with custom MATLAB software (MathWorks, Natick, MA). The diameter of a cyst or confluent cysts was quantified in the red-channel and green-channel images separately. Results Cyst identification gave complete agreement between red-channel images and the standard full-color images. This was not the case for green-channel images, which failed to expose cysts visible in the standard full-color images in five patients, all of whom had dark fundi. Cysts appeared more numerous and covered a larger area in the red channel (733 ± 604 μm) than in the green channel (349 ± 433 μm, P < .006). Conclusions Cysts may be underdetected with present fundus camera methods, particularly when short-wavelength light is emphasized or in patients with dark fundi. Longer-wavelength techniques may improve the detection of cysts and provide more information concerning the early

  9. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240×240-pixel images with the world's fastest high-precision faint-light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). [ESO PR Photo 22a/09: The CCD220 detector. ESO PR Photo 22b/09: The OCam camera. ESO PR Video 22a/09: OCam images.] "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  10. Method for out-of-focus camera calibration.

    Science.gov (United States)

    Bell, Tyler; Xu, Jing; Zhang, Song

    2016-03-20

    State-of-the-art camera calibration methods assume that the camera is at least nearly in focus and thus fail if the camera is substantially defocused. This paper presents a method which enables the accurate calibration of an out-of-focus camera. Specifically, the proposed method uses a digital display (e.g., liquid crystal display monitor) to generate fringe patterns that encode feature points into the carrier phase; these feature points can be accurately recovered, even if the fringe patterns are substantially blurred (i.e., the camera is substantially defocused). Experiments demonstrated that the proposed method can accurately calibrate a camera regardless of the amount of defocusing: the focal length difference is approximately 0.2% when the camera is focused compared to when the camera is substantially defocused.
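The key idea above, that a phase-encoded feature survives defocus blur, can be sketched with standard N-step phase shifting. The simulation below is an illustration of the principle, not the paper's implementation: it encodes a carrier phase in 1-D fringes, blurs them heavily with a moving average, and recovers the phase essentially unchanged away from the image borders.

```python
import numpy as np

# Display N phase-shifted fringe patterns (1-D here for brevity).
# Parameters are illustrative, not from the paper.
W, N, freq = 256, 4, 8                      # width, phase steps, cycles
x = np.arange(W)
phase_true = 2 * np.pi * freq * x / W
shifts = 2 * np.pi * np.arange(N) / N
I = np.stack([0.5 + 0.5 * np.cos(phase_true + s) for s in shifts])

# Simulate heavy defocus with a wide moving-average blur.
kernel = np.ones(21) / 21
I_blur = np.stack([np.convolve(row, kernel, mode="same") for row in I])

# Standard N-step phase recovery; the blur scales the fringe amplitude
# but the amplitude cancels in the atan2 ratio, so the phase survives.
num = -(I_blur * np.sin(shifts)[:, None]).sum(axis=0)
den = (I_blur * np.cos(shifts)[:, None]).sum(axis=0)
phase_rec = np.arctan2(num, den)

# Wrapped phase error; tiny in the interior despite the strong blur.
err = np.angle(np.exp(1j * (phase_rec - phase_true)))
```

This is why the calibration method can recover feature points accurately even when the fringe images are substantially blurred.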

  11. Robust pedestrian detection by combining visible and thermal infrared cameras

    National Research Council Canada - National Science Library

    Lee, Ji Hoon; Choi, Jong-Suk; Jeon, Eun Som; Kim, Yeong Gon; Le, Toan Thanh; Shin, Kwang Yong; Lee, Hyeon Chang; Park, Kang Ryoung

    2015-01-01

    .... However, most of the previous studies use a single camera system, either a visible light or thermal camera, and their performances are affected by various factors such as shadow, illumination change...

  12. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

    Science.gov (United States)

    Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R

    2016-12-13

    Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0
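The two operating points mentioned above can be illustrated with a threshold sweep on a score distribution. The data and target values below are synthetic and illustrative, not the study's results; the point is only how one threshold is chosen for high specificity and another for high sensitivity.

```python
import numpy as np

# Synthetic scores: 1 = referable DR; positives score higher on average.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=2000)
scores = np.clip(labels * 0.35 + rng.normal(0.4, 0.2, 2000), 0, 1)

def sens_spec(threshold):
    pred = scores >= threshold
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    return sens, spec

# Sweep thresholds: the lowest threshold reaching the specificity
# target, and the highest threshold reaching the sensitivity target
# (targets here are illustrative).
ts = np.linspace(0, 1, 501)
hi_spec_t = min(t for t in ts if sens_spec(t)[1] >= 0.98)
hi_sens_t = max(t for t in ts if sens_spec(t)[0] >= 0.975)
```

A deployed screening algorithm would report sensitivity and specificity at each of the two chosen thresholds, as the study does for its validation sets.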

  13. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    Science.gov (United States)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles, which leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized as totally focused, which makes finding stereo correspondences easier.
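The "Kalman-like" update of depth estimates described above reduces, for two independent measurements, to inverse-variance weighting: the lower-variance estimate gets more weight and the fused variance is smaller than either input. The numbers below are illustrative, not values from the paper.

```python
# Fuse two depth estimates, each with an associated variance, by
# inverse-variance weighting (the static case of a Kalman update).
def fuse(d1, var1, d2, var2):
    """Return fused depth and variance; lower variance weighs more."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    d = (w1 * d1 + w2 * d2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return d, var

# Illustrative virtual-depth estimates from two micro-images.
d, var = fuse(2.0, 0.04, 2.2, 0.16)
```

Repeating this update across all micro-images that see the same scene patch yields the probabilistic depth map, one (depth, variance) pair per pixel.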

  14. Contractile action of galanin analogues on rat isolated gastric fundus strips is modified by tachyphylaxis to substance P.

    Science.gov (United States)

    Korolkiewicz, R; Sliwiński, W; Rekowski, P; Halama, A; Mucha, P; Szczurowicz, A; Guzowski, P; Korolkiewicz, K Z

    1996-06-01

    This study was undertaken to characterize the interaction of porcine galanin (Gal) and some of its analogues with their receptors on rat gastric fundus muscle strips. Gal, galantide (M15) and Gal(1-14)-[Abu8]SCY-I evoked concentration-dependent contractions of gastric smooth muscle strips. Reproducible effects were observed in concentrations of 1-300, 3-1000 and 100-3000 nM, respectively. The EC50 values for the contractile effect equalled 13, 70 and 187 nM, respectively. Hill's coefficient for Gal is 1.03, indicating an interaction of one Gal molecule with one receptor and fulfilling the criteria of classical receptor theory. For M15 and Gal(1-14)-[Abu8]SCY-I, Hill's coefficients differ from 1 (0.73 and 1.56, respectively), indicating that the principle of interaction of one drug molecule with one receptor may not apply. The contraction induced by 300 nM Gal was not significantly modified by tachyphylaxis to substance P (SP). In contrast, tachyphylaxis to SP decreased the contractile effects of M15 and Gal(1-14)-[Abu8]SCY-I by about 57.7 +/- 3% and 39.6 +/- 5%, respectively. The findings suggest that the contractile actions of M15 and Gal(1-14)-[Abu8]SCY-I are probably not due to their agonist activity at Gal receptors alone, but may also result from subsequent stimulation of SP receptors or release of endogenous SP.
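The EC50 and Hill-coefficient values quoted above come from the Hill equation, E = Emax · C^n / (EC50^n + C^n). The sketch below illustrates the two properties the abstract relies on: at C = EC50 the response is exactly half-maximal when n = 1, and a larger n steepens the concentration-response curve. The values are illustrative, not a re-analysis of the study data.

```python
# Hill equation for a concentration-response curve.
def hill(conc_nM, ec50_nM, n, emax=1.0):
    return emax * conc_nM**n / (ec50_nM**n + conc_nM**n)

# With n = 1 (one ligand molecule per receptor), the response at the
# EC50 is exactly half-maximal.
assert abs(hill(13.0, 13.0, 1.0) - 0.5) < 1e-12

# A coefficient n > 1 steepens the curve around the EC50; n < 1
# flattens it (illustrative EC50 of 100 nM).
steep = hill(200.0, 100.0, 1.56)
shallow = hill(200.0, 100.0, 0.73)
```

Fitting n to measured responses is how a Hill coefficient such as 1.03, 0.73 or 1.56 is obtained in practice.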

  15. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 in 1000 people in the United States. The standard treatment is a highly invasive exploratory neck surgery called parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high-resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated cervical scintigraphic imaging device with ultra-high resolution (~1 mm) and high sensitivity (10× that of a conventional camera). It will be based on a multiple-pinhole SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  16. The large APEX bolometer camera LABOCA

    Science.gov (United States)

    Siringo, Giorgio; Kreysa, Ernst; Kovacs, Attila; Schuller, Frederic; Weiß, Axel; Esch, Walter; Gemünd, Hans-Peter; Jethava, Nikhil; Lundershausen, Gundula; Güsten, Rolf; Menten, Karl M.; Beelen, Alexandre; Bertoldi, Frank; Beeman, Jeffrey W.; Haller, Eugene E.; Colin, Angel

    2008-07-01

    A new facility instrument, the Large APEX Bolometer Camera (LABOCA), developed by the Max-Planck-Institut für Radioastronomie (MPIfR, Bonn, Germany), has been commissioned in May 2007 for operation on the Atacama Pathfinder Experiment telescope (APEX), a 12 m submillimeter radio telescope located at 5100 m altitude on Llano de Chajnantor in northern Chile. For mapping, this 295-bolometer camera for the 870 micron atmospheric window operates in total power mode without wobbling the secondary mirror. One LABOCA beam is 19 arcsec FWHM and the field of view of the complete array covers 100 square arcmin. Combined with the high efficiency of APEX and the excellent atmospheric transmission at the site, LABOCA offers unprecedented capability in large scale mapping of submillimeter continuum emission. Details of design and operation are presented.
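The quoted 19 arcsec beam is close to the diffraction limit of the telescope, which can be checked with the usual FWHM ≈ k·λ/D estimate. The factor k below (1.2) depends on the illumination taper of the dish and is an assumption for illustration, not a value from the paper.

```python
import math

# Rough diffraction check of the LABOCA beam: 12 m dish, 870 micron
# atmospheric window, FWHM ~ k * lambda / D with an assumed k = 1.2.
wavelength_m = 870e-6
dish_m = 12.0
fwhm_rad = 1.2 * wavelength_m / dish_m
fwhm_arcsec = math.degrees(fwhm_rad) * 3600
print(round(fwhm_arcsec))  # 18
```

The result of roughly 18 arcsec is consistent with the 19 arcsec FWHM beam quoted for the instrument.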

  17. First polarised light with the NIKA camera

    CERN Document Server

    Ritacco, A; Adane, A; Ade, P; André, P; Beelen, A; Belier, B; Benoît, A; Bideaud, A; Billot, N; Bourrion, O; Calvo, M; Catalano, A; Coiffard, G; Comis, B; D'Addabbo, A; Désert, F -X; Doyle, S; Goupy, J; Kramer, C; Leclercq, S; Macías-Pérez, J F; Martino, J; Mauskopf, P; Maury, A; Mayet, F; Monfardini, A; Pajot, F; Pascale, E; Perotto, L; Pisano, G; Ponthieu, N; Rebolo-Iglesias, M; Réveret, V; Rodriguez, L; Savini, G; Schuster, K; Sievers, A; Thum, C; Triqueneaux, S; Tucker, C; Zylka, R

    2015-01-01

    NIKA is a dual-band camera operating with 315 frequency multiplexed LEKIDs cooled at 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test-bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer Half Wave Plate. Then, the signal is analysed by a wire grid and finally absorbed by the LEKIDs. The small time constant (<1 ms) of the LEKID detectors combined with the modulation of the HWP enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U, representing linear polarisation. In this paper we present results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at mm wavelength.

  18. First Polarised Light with the NIKA Camera

    Science.gov (United States)

    Ritacco, A.; Adam, R.; Adane, A.; Ade, P.; André, P.; Beelen, A.; Belier, B.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; D'Addabbo, A.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Leclercq, S.; Macías-Pérez, J. F.; Martino, J.; Mauskopf, P.; Maury, A.; Mayet, F.; Monfardini, A.; Pajot, F.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Rebolo-Iglesias, M.; Revéret, V.; Rodriguez, L.; Savini, G.; Schuster, K.; Sievers, A.; Thum, C.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2016-08-01

    NIKA is a dual-band camera operating with 315 frequency multiplexed LEKIDs cooled at 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test-bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer half-wave plate. Then, the signal is analyzed by a wire grid and finally absorbed by the lumped element kinetic inductance detectors (LEKIDs). The small time constant (<1 ms) of the LEKIDs combined with the modulation of the HWP enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U, representing linear polarisation. In this paper, we present the results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at millimeter wavelength.
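The 4θ modulation described above can be sketched numerically: a half-wave plate at angle θ followed by a wire-grid analyzer produces a detector signal m = ½(I + Q·cos 4θ + U·sin 4θ), so lock-in demodulation at the fourth harmonic recovers Q and U while the mean gives I. The Stokes values below are synthetic, purely for illustration.

```python
import numpy as np

# Synthetic Stokes parameters (illustrative, not NIKA data).
I_true, Q_true, U_true = 1.0, 0.3, -0.1

# Detector signal over one full HWP rotation: modulation at 4*theta.
theta = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
m = 0.5 * (I_true + Q_true * np.cos(4 * theta) + U_true * np.sin(4 * theta))

# Lock-in style recovery of the three Stokes parameters.
I_rec = 2 * m.mean()
Q_rec = 4 * (m * np.cos(4 * theta)).mean()
U_rec = 4 * (m * np.sin(4 * theta)).mean()
```

The fast (<1 ms) LEKID time constant matters because the detector must follow this 4θ modulation faithfully for the demodulation to work.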

  19. Camera Augmented Mobile C-arm

    Science.gov (United States)

    Wang, Lejing; Weidert, Simon; Traub, Joerg; Heining, Sandro Michael; Riquarts, Christian; Euler, Ekkehard; Navab, Nassir

    The Camera Augmented Mobile C-arm (CamC) system, which extends a regular mobile C-arm with a video camera, provides an overlay of X-ray and video images. Thanks to the mirror construction and a one-time calibration of the device, the acquired X-ray images are co-registered with the video images without any calibration or registration during the intervention. It is very important to quantify and qualify the system before its introduction into the OR. In this communication, we extend the previously performed overlay accuracy analysis of the CamC system by another clinically important parameter, the radiation dose applied to the patient. Since the mirror of the CamC system absorbs and scatters radiation, we introduce a method for estimating the correct applied dose using an independent dose measurement device. The results show that the mirror absorbs and scatters 39% of the X-ray radiation.
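Given the 39% loss reported above, the dose correction itself is a one-line scaling: the dose reaching the patient is the source-side measurement multiplied by the transmitted fraction. The sketch below is an illustration of that arithmetic only, not the paper's measurement procedure, and the 10 mGy input is a made-up example value.

```python
# Fraction of X-ray radiation absorbed and scattered by the mirror,
# as measured in the study.
MIRROR_LOSS = 0.39

def dose_after_mirror(source_dose_mGy):
    """Scale a source-side dose reading by the mirror transmission."""
    return source_dose_mGy * (1.0 - MIRROR_LOSS)

print(round(dose_after_mirror(10.0), 2))  # 6.1
```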

  20. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA)

    Directory of Open Access Journals (Sweden)

    Veena G.S

    2013-12-01

    Full Text Available The proposed work aims to create a smart camera application that eliminates the need for a human presence to detect unwanted sinister activities, such as theft in this case. Spread across the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as "objects") using our smart camera system based on the OpenCV platform. Using OpenCV Haar training, which employs the Viola-Jones algorithm implementation in OpenCV, we teach the machine to identify the object under varying environmental conditions. An added face recognition feature is based on Principal Component Analysis (PCA) to generate eigenfaces; test images are verified against the eigenfaces with a distance-based algorithm, such as the Euclidean distance or the Mahalanobis distance. If the object is misplaced, or an unauthorized user is in the immediate vicinity of the object, an alarm signal is raised.
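The eigenface matching step described above can be sketched compactly: PCA on a gallery of flattened face images yields the eigenfaces, and a probe is identified by Euclidean distance in the reduced space. The gallery below is synthetic random data standing in for real face images, and the component count is an arbitrary illustrative choice.

```python
import numpy as np

# Synthetic gallery: 20 flattened 64x64 "faces" (illustrative data).
rng = np.random.default_rng(2)
gallery = rng.normal(size=(20, 64 * 64))

# Eigenfaces = principal components of the mean-centered gallery.
mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                      # keep 10 components
gallery_proj = centered @ eigenfaces.T    # gallery in eigenface space

def identify(face):
    """Index of the closest gallery face by Euclidean distance."""
    proj = (face - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(gallery_proj - proj, axis=1)))

# A slightly noisy copy of gallery face 7 should still match itself.
probe = gallery[7] + rng.normal(scale=0.05, size=64 * 64)
```

A Mahalanobis variant would replace the Euclidean norm with a distance weighted by the inverse eigenvalue of each component.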