WorldWideScience

Sample records for night vision system

  1. INVIS: Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color night-vision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  2. Low Cost Night Vision System for Intruder Detection

    Science.gov (United States)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
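
    The abstract gives no implementation details beyond OpenCV and RGB histograms, so the sketch below is one plausible reading: each frame's colour histogram is compared against an empty-scene reference, and large deviations raise an alert. The Bhattacharyya metric and the 0.2 threshold are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of histogram-based intruder detection for a static camera.
import cv2
import numpy as np

def rgb_histogram(frame_bgr, bins=32):
    """Concatenated, normalised per-channel histogram of a BGR frame."""
    hists = [cv2.calcHist([frame_bgr], [c], None, [bins], [0, 256])
             for c in range(3)]
    hist = np.concatenate(hists).astype(np.float32)
    return hist / (hist.sum() + 1e-9)

def is_intruder(reference_hist, frame_bgr, threshold=0.2):
    """Flag a frame whose colour distribution departs from the reference."""
    dist = cv2.compareHist(reference_hist, rgb_histogram(frame_bgr),
                           cv2.HISTCMP_BHATTACHARYYA)
    return dist > threshold

cap = cv2.VideoCapture(0)                  # static camera, index 0 assumed
ok, empty_scene = cap.read()
if ok:
    reference = rgb_histogram(empty_scene)  # histogram of the empty scene
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if is_intruder(reference, frame):
            print("possible intruder detected")
```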

  3. Night vision: changing the way we drive

    Science.gov (United States)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  4. Multi-channel automotive night vision system

    Science.gov (United States)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The light source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and has automatic light intensity adjustment, which together ensure image quality. The composition of the system is described in detail; on this basis, beam collimation, laser diode (LD) driving and LD temperature control of the near-infrared laser source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, blind spot information (BLIS), parking assistance and alarm systems, both day and night.

  5. Airborne Use of Night Vision Systems

    Science.gov (United States)

    Mepham, S.

    1990-04-01

    Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force. These are seen to be: operations in the NATO Central Region; a night as well as a day capability; low level, high speed penetration; attack of battlefield targets, especially groups of tanks; and meeting these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced by a first pass attack. It is therefore most important that the pilot not only be able to fly at low level to the target but also be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high speed low level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst it is true that this is for winter conditions and the situation is better in summer, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is Terrain Following Radar (TFR). This system provides a complete 24-hour capability, but it has two main disadvantages. First, it is an active system, which means it can be jammed or homed onto, and it is chiefly suited to attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to more than a small number of aircraft.

  6. Performance characterization of night vision equipment based on Triangle Orientation Discrimination (TOD) methodology

    NARCIS (Netherlands)

    Laurent, N.; Lejard, C.; Deltel, G.; Bijl, P.

    2013-01-01

    Night vision equipment is crucial to achieving supremacy and safety of troops on the battlefield. Evidently, system integrators, MODs and end-users need access to reliable quantitative characterization of the expected field performance when using night vision equipment. The Image

  7. Night Vision Manual for the Flight Surgeon.

    Science.gov (United States)

    1985-08-01

    macula and fovea centralis. 4. Duality theory of vision: extends sensitivity of vision over 100,000 times (Fig. 12). ... lowered night vision capabilities due to disease or degenerations. F. Hypoxia: 1. Decrement of central vision due to O2 lack is quite small; such as, at

  8. Night-vision goggles for night-blind subjects : subjective evaluation after 2 years of use

    NARCIS (Netherlands)

    Hartong, D. T.; Kooijman, A. C.

    Purpose: To evaluate the usefulness of night-vision goggles (NVG) for night-blind subjects after 1 and 2 years of use. Methods: Eleven night-blind subjects with retinitis pigmentosa used NVG for a 2-year period. At the end of each year, they were requested to fill in two questionnaires regarding

  9. Progress in color night vision

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2012-01-01

    We present an overview of our recent progress and the current state-of-the-art techniques of color image fusion for night vision applications. Inspired by previously developed color opponent fusing schemes, we initially developed a simple pixel-based false color-mapping scheme that yielded fused

  10. Development of an Automatic Testing Platform for Aviator’s Night Vision Goggle Honeycomb Defect Inspection

    Directory of Open Access Journals (Sweden)

    Bo-Lin Jian

    2017-06-01

    Due to the direct influence of night vision equipment availability on the safety of night-time aerial reconnaissance, maintenance needs to be carried out regularly. Unfortunately, some defects are not easy to observe or are not even detectable by human eyes. Consequently, this study proposed a novel automatic defect detection system for the aviator's night vision imaging systems AN/AVS-6(V)1 and AN/AVS-6(V)2. An auto-focusing process consisting of a sharpness calculation and a gradient-based variable-step search method is applied to achieve an automatic detection system for honeycomb defects. This work also developed a test platform for sharpness measurement. It demonstrates that honeycomb defects can be precisely recognized and that the number of defects can be determined automatically during the inspection. Most importantly, the proposed approach significantly reduces time consumption, as well as human assessment error, during night vision goggle inspection procedures.
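
    As a rough illustration of the auto-focusing step described above, the sketch below pairs a Tenengrad sharpness measure with a variable-step hill-climb over focus position. The move_to and capture callables are hypothetical stand-ins for the platform's motor and camera interfaces, and the step sizes are illustrative, not values from the paper.

```python
# Hedged sketch: sharpness metric plus gradient-guided variable-step search.
import cv2
import numpy as np

def tenengrad_sharpness(gray):
    """Mean squared gradient magnitude (one common sharpness measure)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx**2 + gy**2))

def autofocus(move_to, capture, start=0.0, step=1.0, min_step=0.01):
    """Hill-climb the focus position, halving the step on each overshoot."""
    pos, best = start, tenengrad_sharpness(capture())
    direction = +1
    while step >= min_step:
        move_to(pos + direction * step)
        s = tenengrad_sharpness(capture())
        if s > best:                      # keep climbing in this direction
            pos, best = pos + direction * step, s
        else:                             # overshoot: reverse and shrink step
            direction = -direction
            step *= 0.5
    return pos
```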

  11. Sensor fusion to enable next generation low cost Night Vision systems

    Science.gov (United States)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems would be too costly to achieve high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels reduce die size, but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially the implications of molding highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both the performance and the cost problems. To compensate for the effect of FIR sensor degradation on pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance. This paper also gives an overview of the first results showing that a reduction of FIR sensor resolution can be compensated using fusion techniques and a reduction of sensitivity can be
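
    MultiSensorBoosting itself is not specified in the abstract; as a loose, hedged analogue of boosting over features from both sensors at once, the sketch below concatenates placeholder FIR and NIR patch features and lets scikit-learn's AdaBoost (decision stumps) select the discriminative ones. All data, features and dimensions are dummies, not the sub-pixel features of the paper.

```python
# Loose analogue of boosting over joint FIR/NIR features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def patch_features(patch):
    """Placeholder per-patch features: a coarse intensity histogram."""
    hist, _ = np.histogram(patch, bins=16, range=(0, 255))
    return hist / (hist.sum() + 1e-9)

def fused_features(fir_patch, nir_patch):
    """Concatenate both sensors so boosting can select across them."""
    return np.concatenate([patch_features(fir_patch), patch_features(nir_patch)])

rng = np.random.default_rng(0)
fir = rng.integers(0, 256, (200, 16, 16))   # stand-in FIR patches
nir = rng.integers(0, 256, (200, 16, 16))   # stand-in NIR patches
y = rng.integers(0, 2, 200)                 # dummy labels, 1 = pedestrian
X = np.array([fused_features(f, n) for f, n in zip(fir, nir)])
clf = AdaBoostClassifier(n_estimators=200).fit(X, y)  # default base: stumps
```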

  12. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    Science.gov (United States)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage or electric heaters. Because infrared cages and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate in the low luminous density of the test. Moreover, some special instruments such as satellite-borne infrared sensors are sensitive to visible light, so the lighting cannot be supplemented during the test. To improve fine monitoring of the spacecraft and exhibition of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensifier ICCD camera, an assistant luminance system, a glare protection system, a thermal control system and a computer control system. Multi-frame accumulation target detection is adopted for high-quality image recognition in the captive test. The optical, mechanical and electrical systems are designed and integrated to be highly adaptable to the vacuum environment. A molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate under a vacuum thermal environment of 1.33×10⁻³ Pa vacuum degree and 100 K shroud temperature in the space environment simulator, with its working temperature maintained at 5 °C during a two-day test. The night vision imaging system achieved a video resolving power of 60 lp/mm.
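
    The multi-frame accumulation step rests on a standard fact: averaging N co-registered frames reduces uncorrelated noise by roughly √N. A minimal NumPy sketch on synthetic data (this is not the BISEE pipeline):

```python
# Minimal sketch of multi-frame accumulation for low-light imaging.
import numpy as np

def accumulate(frames):
    """Average a list of co-registered frames into one image."""
    return np.mean([f.astype(np.float64) for f in frames], axis=0)

truth = np.full((64, 64), 100.0)
noisy = [truth + np.random.normal(0.0, 10.0, truth.shape) for _ in range(16)]
print(np.std(noisy[0] - truth))            # ~10: single-frame noise
print(np.std(accumulate(noisy) - truth))   # ~2.5: 16-frame accumulation
```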

  13. Is More Better? - Night Vision Enhancement System's Pedestrian Warning Modes and Older Drivers.

    Science.gov (United States)

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of an NVES with six different configurations of warning cues: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed best, while visual warnings performed worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to common intuition about warning systems, multi-modal warnings involving visual cues degraded the effectiveness of the NVES for older drivers.

  14. Night vision and electro-optics technology transfer, 1972 - 1981

    Science.gov (United States)

    Fulton, R. W.; Mason, G. F.

    1981-09-01

    The purpose of this special report, 'Night Vision and Electro-Optics Technology Transfer 1972-1981,' is threefold: to illustrate, through actual case histories, the potential for exploiting a highly developed and available military technology to solve non-military problems; to provide, in layman's language, the principles behind night vision and electro-optical devices, so that an awareness may be developed of the potential for adopting this technology to non-military applications; and to obtain maximum dollar return from research and development investments by applying this technology to secondary applications. This includes, but is not limited to, applications by other Government agencies, state and local governments, colleges and universities, and medical organizations. It is hoped that this summary of Technology Transfer activities within the Night Vision and Electro-Optics Laboratory (NV/EOL) will benefit those who desire to explore one of the vast technological resources available within the Defense Department and the Federal Government.

  15. Aviator's night vision system (ANVIS) in Operation Enduring Freedom (OEF): user acceptability survey

    Science.gov (United States)

    Hiatt, Keith L.; Trollman, Christopher J.; Rash, Clarence E.

    2010-04-01

    In 1973, the U.S. Army adopted night vision devices for use in the aviation environment. These devices are based on the principle of image intensification (I2) and have become the mainstay of the aviator's capability to operate during periods of low illumination, i.e., at night. In the nearly four decades that have followed, a number of engineering advancements have significantly improved the performance of these devices. The current version, using 3rd generation I2 technology, is known as the Aviator's Night Vision Imaging System (ANVIS). While considerable experience with performance has been gained during training and peacetime operations, no previous studies have looked at user acceptability and performance issues in a combat environment. This study was designed to compare Army aircrew experiences in a combat environment with currently available information in the published literature (all peacetime laboratory and field training studies) and to determine whether the latter is valid. The purpose of this study was to identify and assess aircrew satisfaction with the ANVIS and any visual performance issues or problems relating to its use in Operation Enduring Freedom (OEF). The study consisted of an anonymous survey (based on previously validated surveys used in the laboratory and training environments) of 86 aircrew members (64% rated and 36% non-rated) of an Aviation Task Force approximately 6 months into their OEF deployment. This group represents an aggregate of >94,000 flight hours, of which ~22,000 are ANVIS and ~16,000 during this deployment. Overall user acceptability of ANVIS in a combat environment is discussed.

  16. Night vision imaging systems design, integration, and verification in military fighter aircraft

    Science.gov (United States)

    Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David

    2012-04-01

    This paper describes the development and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University, in order to confer the Night Vision Imaging Systems (NVIS) capability on the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggles (NVG) integration, cockpit instruments and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation activities of the internal and external lights. In particular, an iterative process was established, allowing on-site rapid correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the Test Crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications implemented. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks during NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., the HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications both in the front and

  17. Digital Enhancement of Night Vision and Thermal Images

    National Research Council Canada - National Science Library

    Teo, Chek

    2003-01-01

    .... This thesis explores the effect of the Contrast Limited Adaptive Histogram Equalization (CLAHE) process on night vision and thermal images. With better contrast, target detection and discrimination can be improved...
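
    CLAHE is built into OpenCV, so the enhancement the thesis examines can be reproduced in a few lines; the file name and the clip-limit/tile-size parameters below are illustrative assumptions, not the thesis's settings.

```python
# Minimal CLAHE sketch on a grayscale night-vision frame.
import cv2

img = cv2.imread("night_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
assert img is not None, "image not found"
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)          # contrast-limited local equalisation
cv2.imwrite("night_frame_clahe.png", enhanced)
```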

  18. Lens Systems Incorporating A Zero Power Corrector Objectives And Magnifiers For Night Vision Applications

    Science.gov (United States)

    McDowell, M. W.; Klee, H. W.

    1986-02-01

    The use of the zero power corrector concept has been extended to the design of objective lenses and magnifiers suitable for use in night vision goggles. A novel design which can be used as either an f/1.2 objective or an f/2 magnifier is also described.

  19. Vision and Displays for Military and Security Applications The Advanced Deployable Day/Night Simulation Project

    CERN Document Server

    Niall, Keith K

    2010-01-01

    Vision and Displays for Military and Security Applications presents recent advances in projection technologies and associated simulation technologies for military and security applications. Specifically, this book covers night vision simulation, semi-automated methods in photogrammetry, and the development and evaluation of high-resolution laser projection technologies for simulation. Topics covered include: advances in high-resolution projection, advances in image generation, geographic modeling, and LIDAR imaging, as well as human factors research for daylight simulation and for night vision devices. This title is ideal for optical engineers, simulator users and manufacturers, geomatics specialists, human factors researchers, and for engineers working with high-resolution display systems. It describes leading-edge methods for human factors research, and it describes the manufacture and evaluation of ultra-high resolution displays to provide unprecedented pixel density in visual simulation.

  20. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    Science.gov (United States)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets, which requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete, and in recent years training simulators have stimulated NVGs with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approaches for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have 5 to 10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that compensate for the lack of contrast inherent in a DLP device. This paper expands on the differences between LCoS and DLP projectors for stimulating NVGs and summarizes the benefits of both in night-vision simulation training systems.

  1. Color vision abnormality as an initial presentation of the complete type of congenital stationary night blindness

    Directory of Open Access Journals (Sweden)

    Tan X

    2013-08-01

    Xue Tan, Aya Aoki, Yasuo Yanagi (Department of Ophthalmology, University of Tokyo School of Medicine, Hongo, Bunkyo-ku, Tokyo, Japan). Abstract: Patients with the complete form of congenital stationary night blindness (CSNB) often have reduced visual acuity, myopia, impaired night vision, and sometimes nystagmus and strabismus; however, they seldom complain of color vision abnormality. A 17-year-old male at technical school showed abnormalities in a color perception test for employment and was referred to our hospital for a detailed examination. He had no family history of color vision deficiency and no other symptoms. At the initial examination, his best-corrected visual acuity was 1.2 in both eyes. His fundus showed no abnormalities except for a somewhat yellowish reflex in the fovea of both eyes. Electroretinography (ERG) showed a good response in the cone ERG and 30 Hz flicker ERG; however, the bright-flash, mixed rod and cone ERG showed a negative type with a reduced b-wave (positive deflection). There was no response in the rod ERG, either. From these typical ERG findings, the patient was diagnosed with complete congenital stationary night blindness. This case underscores the importance of ERG in diagnosing the cause of a color vision anomaly. Keywords: congenital stationary night blindness, CSNB, electroretinogram, ERG, color vision defect

  2. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of wavelet threshold function restricts the effect of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function, and we then propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared against the median filtering and the wavelet soft-threshold de-noising methods. The new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95% and 11.38%, respectively. We carry out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, laying a foundation for apple harvesting robots working at night.
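
    The paper's fuzzy threshold function is its contribution and is not reproduced here; the sketch below implements the classical wavelet soft-threshold baseline it is compared against, using PyWavelets with the common universal threshold (the wavelet choice and threshold rule are my assumptions).

```python
# Baseline wavelet soft-threshold de-noising (not the paper's fuzzy variant).
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    # Universal threshold, with noise estimated from the finest diagonal band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, t, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

# Toy demonstration on a synthetic noisy image.
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + np.random.normal(0, 15, clean.shape)
print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))
```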

  3. Color vision abnormality as an initial presentation of the complete type of congenital stationary night blindness.

    Science.gov (United States)

    Tan, Xue; Aoki, Aya; Yanagi, Yasuo

    2013-01-01

    Patients with the complete form of congenital stationary night blindness (CSNB) often have reduced visual acuity, myopia, impaired night vision, and sometimes nystagmus and strabismus; however, they seldom complain of color vision abnormality. A 17-year-old male at technical school showed abnormalities in a color perception test for employment and was referred to our hospital for a detailed examination. He had no family history of color vision deficiency and no other symptoms. At the initial examination, his best-corrected visual acuity was 1.2 in both eyes. His fundus showed no abnormalities except for a somewhat yellowish reflex in the fovea of both eyes. Electroretinography (ERG) showed a good response in the cone ERG and 30 Hz flicker ERG; however, the bright-flash, mixed rod and cone ERG showed a negative type with a reduced b-wave (positive deflection). There was no response in the rod ERG, either. From these typical ERG findings, the patient was diagnosed with complete congenital stationary night blindness. This case underscores the importance of ERG in diagnosing the cause of a color vision anomaly.

  4. Is More Better? — Night Vision Enhancement System’s Pedestrian Warning Modes and Older Drivers

    Science.gov (United States)

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of an NVES with six different configurations of warning cues: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed best, while visual warnings performed worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to common intuition about warning systems, multi-modal warnings involving visual cues degraded the effectiveness of the NVES for older drivers. PMID:21050616

  5. A real-time monitoring system for night glare protection

    Science.gov (United States)

    Ma, Jun; Ni, Xuxiang

    2010-11-01

    When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions, and details are lost in and near those saturated regions because of glare. This work aims at developing a real-time night monitoring system that decreases the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens and a DSP. The LCoS, a reflective liquid crystal device, can digitally modulate the intensity of the reflected light at every pixel. Through the modulation function of the LCoS, the CCD is exposed region by region. Under DSP control, the light intensity is reduced to a minimum in the glare regions, while in the other regions it is negative-feedback modulated based on PID theory, so that more details of the object are imaged on the CCD and glare protection is achieved. In experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. High-quality, high-dynamic-range images are captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare source can be suppressed.
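
    As a hedged sketch of the per-region negative feedback described above: glare tiles are driven to minimum transmission while the remaining tiles are PID-regulated toward a target mean brightness. The tile grid, gains and LCoS write interface are illustrative assumptions, not values from the paper.

```python
# Per-tile PID exposure control sketch for an LCoS-modulated camera.
import numpy as np

class RegionPID:
    """Incremental PID on a tile's mean brightness (gains are illustrative)."""
    def __init__(self, kp=2e-3, ki=2e-4, kd=1e-3, target=128.0):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, mean_brightness):
        err = self.target - mean_brightness
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def modulate(frame, lcos, pids, glare_level=250, tiles=8):
    """frame: 8-bit image; lcos: tiles x tiles transmission map in [0, 1]."""
    h, w = frame.shape
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            mean = frame[i*th:(i+1)*th, j*tw:(j+1)*tw].mean()
            if mean >= glare_level:      # saturated tile: block the glare
                lcos[i, j] = 0.0
            else:                        # feedback toward the target brightness
                lcos[i, j] = np.clip(lcos[i, j] + pids[i][j].update(mean),
                                     0.0, 1.0)

pids = [[RegionPID() for _ in range(8)] for _ in range(8)]  # one PID per tile
```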

  6. Detection of Special Operations Forces Using Night Vision Devices

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.M.

    2001-10-22

    Night vision devices, such as image intensifiers and infrared imagers, are readily available to a host of nations, organizations, and individuals through international commerce. Once the trademark of special operations units, these devices are widely advertised to ''turn night into day''. In truth, they cannot accomplish this formidable task, but they do offer impressive enhancement of vision in limited light scenarios through electronically generated images. Image intensifiers and infrared imagers are both electronic devices for enhancing vision in the dark. However, each is based upon a totally different physical phenomenon. Image intensifiers amplify the available light energy, whereas infrared imagers detect the thermal energy radiated from all objects. Because of this, each device operates from energy which is present in a different portion of the electromagnetic spectrum. This leads to differences in the ability of each device to detect and/or identify objects. This report is a compilation of the available information on both state-of-the-art image intensifiers and infrared imagers. Image intensifiers developed in the United States, as well as some foreign made image intensifiers, are discussed. Image intensifiers are categorized according to their spectral response and sensitivity using the nomenclature GEN I, GEN II, and GEN III. As the first generation of image intensifiers, GEN I, were large and of limited performance, this report deals only with GEN II and GEN III equipment. Infrared imagers are generally categorized according to their spectral response, sensor materials, and related sensor operating temperature using the nomenclature Medium Wavelength Infrared (MWIR) Cooled and Long Wavelength Infrared (LWIR) Uncooled. MWIR Cooled refers to infrared imagers which operate in the 3 to 5 μm wavelength electromagnetic spectral region and require either mechanical or thermoelectric coolers to keep the sensors operating at 77 K

  7. The Effects of the Personal Armor System for Ground Troops (PASGT) and the Advanced Combat Helmet (ACH) with and without PVS-14 Night Vision Goggles (NVG) on Neck Biomechanics During Dismounted Soldier Movements

    National Research Council Canada - National Science Library

    LaFiandra, Michael; Harman, Everett; Cornelius, Nancy; Frykman, Peter; Gutekunst, David; Nelson, Gabe

    2007-01-01

    Kevlar helmets provide the soldier with basic ballistic and impact protection. However, the helmet has recently become a mounting platform for devices such as night-vision goggles, drop down displays, weapon-aiming systems, etc...

  8. Improving Night Time Driving Safety Using Vision-Based Classification Techniques.

    Science.gov (United States)

    Chien, Jong-Chih; Chen, Yong-Sheng; Lee, Jiann-Der

    2017-09-24

    The risks involved in nighttime driving include drowsy drivers and dangerous vehicles. Prominent among the more dangerous vehicles at night are the larger vehicles, which usually move faster at night on a highway. In addition, the risk level of driving around larger vehicles rises significantly when the driver's attention becomes distracted, even for a short period of time. For the purpose of alerting the driver and elevating his or her safety, in this paper we propose two components for any modern vision-based Advanced Driver Assistance System (ADAS). These two components work separately for the single purpose of alerting the driver in dangerous situations. The purpose of the first component is to ascertain that the driver is in a sufficiently wakeful state to receive and process warnings; this is the driver drowsiness detection component. It uses infrared images of the driver to analyze his eye movements using an MSR plus a simple heuristic, and issues alerts when the driver's eyes show distraction or are closed for a longer than usual duration. Experimental results show that this component can detect closed eyes with an accuracy of 94.26% on average, which is comparable to previous results using more sophisticated methods. The purpose of the second component is to alert the driver when the vehicle is moving around larger vehicles at dusk or night time. The large vehicle detection component accepts images from a regular video driving recorder as input. A bi-level system of classifiers, which includes a novel MSR-enhanced KAZE-based Bag-of-Features classifier, is proposed to avoid false negatives. In both components, we propose an improved version of the Multi-Scale Retinex (MSR) algorithm to augment the contrast of the input. Several experiments were performed to test the effects of the MSR and each classifier, and the results are presented in the experimental results section
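
    Both components rely on an improved Multi-Scale Retinex; the paper's improvements are not public, but the standard MSR it builds on is straightforward: subtract log-domain Gaussian surrounds at several scales, average, and rescale for display. The scales below are common illustrative choices, not the paper's parameters.

```python
# Standard Multi-Scale Retinex sketch for contrast augmentation.
import cv2
import numpy as np

def multi_scale_retinex(gray, sigmas=(15, 80, 250)):
    img = gray.astype(np.float64) + 1.0           # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        surround = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(surround)     # single-scale retinex
    msr /= len(sigmas)                            # average over scales
    # Stretch back to a displayable 8-bit range.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-9)
    return (255 * msr).astype(np.uint8)
```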

  9. A Comparison of the AVS-9 and the Panoramic Night Vision Goggles During Rotorcraft Hover and Landing

    Science.gov (United States)

    Szoboszlay, Zoltan; Haworth, Loran; Simpson, Carol

    2000-01-01

    A flight test was conducted to assess any differences in pilot-vehicle performance and pilot opinion between the use of a current generation night vision goggle (the AVS-9) and one variant of the prototype panoramic night vision goggle (the PNVGII). The panoramic goggle has more than double the horizontal field-of-view of the AVS-9, but reduced image quality. Overall the panoramic goggles compared well to the AVS-9 goggles. However, pilot comment and data are consistent with the assertion that some of the benefits of additional field-of-view with the panoramic goggles were negated by the reduced image quality of the particular variant of the panoramic goggles tested.

  10. A Comparison of the AVS-9 and the Panoramic Night Vision Goggle During Rotorcraft Hover and Landing

    Science.gov (United States)

    Szoboszlay, Zoltan; Haworth, Loran; Simpson, Carol; Rutkowski, Michael (Technical Monitor)

    2001-01-01

    The purpose of this flight test was to measure any differences in pilot-vehicle performance and pilot opinion between the use of the current generation AVS-9 Night Vision Goggle and one variant of the prototype Panoramic Night Vision Goggle (the PNVGII). The PNVGII has more than double the horizontal field-of-view of the AVS-9, but reduced image quality. The flight path of the AH-1S helicopter was used as a measure of pilot-vehicle performance. Also recorded were subjective measures of flying qualities, physical reserves of the pilot, situational awareness, and display usability. Pilot comment and data indicate that the benefits of the additional FOV of the PNVGII are to some extent negated by its reduced image quality.

  11. All-CMOS night vision viewer with integrated microdisplay

    Science.gov (United States)

    Goosen, Marius E.; Venter, Petrus J.; du Plessis, Monuko; Faure, Nicolaas M.; Janse van Rensburg, Christo; Rademeyer, Pieter

    2014-02-01

    The unrivalled integration potential of CMOS has made it the dominant technology for digital integrated circuits. With the advent of visible light emission from silicon through hot carrier electroluminescence, several applications arose, all of which rely upon the advantages of mature CMOS technologies for a competitive edge in a very active and attractive market. In this paper we present a low-cost night vision viewer which employs only standard CMOS technologies. A commercial CMOS imager is utilized for near-infrared image capturing, with a 128x96 pixel all-CMOS microdisplay implemented to convey the image to the user. The display is implemented in a standard 0.35 μm CMOS process, with no process alterations or post-processing. The display features a 25 μm pixel pitch and a 3.2 mm x 2.4 mm active area, which, through magnification, presents to the user a virtual image equivalent to a 19-inch display viewed from a distance of 3 meters. This work represents the first application of a CMOS microdisplay in a low-cost consumer product.
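
    The 19-inch-at-3-m equivalence can be sanity-checked with a thin-lens angular approximation; the eyepiece focal length that falls out (about 25 mm) is my inference for illustration, not a figure from the paper.

```python
# Angular-size check of the virtual-image claim.
import math

display_diag = 19 * 0.0254                       # 19-inch diagonal in metres
angle = 2 * math.atan(display_diag / 2 / 3.0)    # subtense at 3 m viewing
micro_diag = math.hypot(3.2e-3, 2.4e-3)          # 4.0 mm active-area diagonal
f_eyepiece = micro_diag / (2 * math.tan(angle / 2))
print(math.degrees(angle), f_eyepiece * 1e3)     # ~9.2 degrees, ~25 mm
```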

  12. Collaboration between human and nonhuman players in Night Vision Tactical Trainer-Shadow

    Science.gov (United States)

    Berglie, Stephen T.; Gallogly, James J.

    2016-05-01

    The Night Vision Tactical Trainer - Shadow (NVTT-S) is a U.S. Army-developed training tool designed to improve critical Manned-Unmanned Teaming (MUMT) communication skills for payload operators in Unmanned Aerial System (UAS) crews. The trainer is composed of several Government Off-The-Shelf (GOTS) simulation components and takes the trainee through a series of escalating engagements using tactically relevant, realistically complex scenarios involving a variety of manned, unmanned, aerial, and ground-based assets. The trainee is the only human player in the game and must collaborate, from a web-based mock operating station, with various non-human players via spoken natural language over simulated radio in order to execute the training missions successfully. Non-human players are modeled in two complementary layers: OneSAF provides basic background behaviors for entities, while NVTT provides higher-level models that control entity actions based on intent extracted from the trainee's spoken natural dialog with game entities. Dialog structure is modeled on Army standards for communication and verbal protocols. This paper presents an architecture that integrates the U.S. Army's Night Vision Image Generator (NVIG), One Semi-Automated Forces (OneSAF), a flight dynamics model, as well as Commercial Off The Shelf (COTS) speech recognition and text-to-speech products to effect an environment with sufficient entity counts and fidelity to enable meaningful teaching and reinforcement of critical communication skills. It further demonstrates the model dynamics and synchronization mechanisms employed to execute purpose-built training scenarios, and to achieve ad-hoc collaboration on-the-fly between human and non-human players in the simulated environment.

  13. Lane Departure System Design using with IR Camera for Night-time Road Conditions

    Directory of Open Access Journals (Sweden)

    Osman Onur Akırmak

    2015-02-01

    Today, one of the largest areas of research and development in the automobile industry is road safety. Many deaths and injuries occur every year on public roads from accidents caused by sleepy drivers, accidents that technology could have been used to prevent. Lane detection at night-time is an important issue in driving assistance systems. This paper deals with vision-based lane detection and tracking at night-time. The project consists of the research and development of an algorithm for automotive systems to detect the departure of the vehicle from its lane. Once the situation is detected, a warning is issued to the driver with sound and a visual message through a head-up display (HUD) system. The lane departure is detected from the images obtained from a single IR camera, which identifies the departure with satisfactory accuracy via an improved-quality video stream. Our experimental results and accuracy evaluation show that the algorithm has good precision and that the detection method is suitable for night-time road conditions.
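
    The abstract names the pipeline only at a high level (single IR camera, lane detection, HUD warning); a minimal OpenCV reading might look like the sketch below, with Canny edges plus a probabilistic Hough transform. The thresholds and the departure criterion are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of night-time lane detection and a simple departure check.
import cv2
import numpy as np

def detect_lane_lines(ir_frame):
    """Edge + Hough line detection, restricted to the lower (road) half."""
    blur = cv2.GaussianBlur(ir_frame, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    h, w = edges.shape
    roi = np.zeros_like(edges)
    roi[h // 2:, :] = edges[h // 2:, :]
    return cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=40,
                           minLineLength=40, maxLineGap=20)

def departure_warning(lines, width, margin=0.15):
    """Warn when a detected lane boundary drifts toward the image centre."""
    if lines is None:
        return False
    centre = width / 2
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs((x1 + x2) / 2 - centre) < margin * width:
            return True
    return False
```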

  14. Low-Latency Embedded Vision Processor (LLEVS)

    Science.gov (United States)

    2016-03-01

    Keywords: algorithms, low-latency video processing, embedded image processor, wearable electronics, helmet-mounted systems, alternative night/day imaging ... external subsystems and data sources with the device. The establishment of data interfaces in terms of data transfer rates, formats and types ... video signals from near-visible infrared (NVIR), shortwave IR (SWIR) and longwave IR (LWIR) sensors is the main processing for the night vision (NV) system

  15. Low dark current InGaAs detector arrays for night vision and astronomy

    Science.gov (United States)

    MacDougal, Michael; Geske, Jon; Wang, Chad; Liao, Shirong; Getty, Jonathan; Holmes, Alan

    2009-05-01

    Aerius Photonics has developed large InGaAs arrays (1K x 1K and greater) with low dark currents for use in night vision applications in the SWIR regime. Aerius will present results of experiments to reduce the dark current density of their InGaAs detector arrays. By varying device designs and passivations, Aerius has achieved a dark current density below 1.0 nA/cm² at 280 K on small-pixel detector arrays. Data are shown for both test structures and focal plane arrays. In addition, data from cryogenically cooled InGaAs arrays will be shown for astronomy applications.
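
    For scale, the quoted 1.0 nA/cm² converts to dark electrons per pixel per second once a pixel pitch is assumed; the 15 µm pitch below is my assumption for illustration, as the abstract does not state one.

```python
# Back-of-envelope dark-current conversion for an assumed pixel pitch.
Q_E = 1.602e-19                    # electron charge, coulombs
j_dark = 1.0e-9                    # quoted dark current density, A/cm^2
pixel_area = (15e-4) ** 2          # 15 um pitch expressed in cm, squared
electrons_per_s = j_dark * pixel_area / Q_E
print(electrons_per_s)             # ~1.4e4 electrons/s per pixel
```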

  16. A SIMULATION ENVIRONMENT FOR AUTOMATIC NIGHT DRIVING AND VISUAL CONTROL

    OpenAIRE

    Arroyo Rubio, Fernando

    2012-01-01

    This project consists of developing an automatic night driving system in a simulation environment. The simulator I have used is TORCS, an open-source car racing simulator written in C++. It is used as an ordinary car racing game, as an AI racing game and as a research platform. The goal of this thesis is to implement an automatic driving system to control the car under night conditions using computer vision. A camera is implemented inside the vehicle and it will detect the reflective ...

  17. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    Science.gov (United States)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g. rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene-Weather-Atmosphere-Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  18. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh, low-visibility environments such as fire and detonation areas is a key function for monitoring the safety of the facilities. 2D and range image data acquired from low-visibility environments are important data for assessing the safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by the scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The range-gated imaging (RGI) system provides 2D and range image data from several RGI images, and it moreover provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot-vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been
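
    The geometry behind range-gating is simple enough to check by hand: light makes a round trip, so a gate delay t images range r = c·t/2, and the gate width sets the depth of the imaged slice. The values in the sketch are illustrative, not parameters from the paper.

```python
# Range-gating timing arithmetic (illustrative values).
C = 3.0e8                          # speed of light, m/s

def gate_range(delay_s):
    """Range imaged at a given gate delay (round trip halved)."""
    return C * delay_s / 2.0

def slice_depth(gate_width_s):
    """Depth of the range slice admitted by the gate width."""
    return C * gate_width_s / 2.0

print(gate_range(200e-9))          # 200 ns delay -> 30 m
print(slice_depth(20e-9))          # 20 ns gate   -> 3 m deep slice
```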

  19. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh, low-visibility environments such as fire and detonation areas is a key function for monitoring the safety of the facilities. 2D and range image data acquired from low-visibility environments are important data for assessing the safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by the scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The range-gated imaging (RGI) system provides 2D and range image data from several RGI images, and it moreover provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot-vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  20. Portable real-time color night vision

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2008-01-01

    We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized

  1. Evolution of the ATLAS Nightly Build System

    International Nuclear Information System (INIS)

    Undrus, A

    2012-01-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over 10 years of development, it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code, which currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python scripting written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments are presented, and future plans are described.

  2. A self-report critical incident assessment tool for army night vision goggle helicopter operations.

    Science.gov (United States)

    Renshaw, Peter F; Wiggins, Mark W

    2007-04-01

    The present study sought to examine the utility of a self-report tool that was designed as a partial substitute for a face-to-face cognitive interview for critical incidents involving night vision goggles (NVGs). The use of NVGs remains problematic within the military environment, as these devices have been identified as a factor in a significant proportion of aircraft accidents and incidents. The self-report tool was structured to identify some of the cognitive features of human performance that were associated with critical incidents involving NVGs. The tool incorporated a number of different levels of analysis, ranging from specific behavioral responses to broader cognitive constructs. Reports were received from 30 active pilots within the Australian Army using the NVG Critical Incident Assessment Tool (NVGCIAT). The results revealed a correspondence between specific types of NVG-related errors and elements of the Human Factors Analysis and Classification System (HFACS). In addition, uncertainty emerged as a significant factor associated with the critical incidents that were recalled by operators. These results were broadly consistent with previous research and provide some support for the utility of subjective assessment tools as a means of extracting critical incident-related data when face-to-face cognitive interviews are not possible. In some circumstances, the NVGCIAT might be regarded as a substitute cognitive interview protocol with some level of diagnosticity.

  3. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    Directory of Open Access Journals (Sweden)

    Miguel Gavilán

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, operating on information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.
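
    The recognition stage is SVM-based, but the abstract does not give the descriptors; in the sketch below, HOG features and scikit-learn's SVC stand in for the paper's exact pipeline. The window sizes, dummy training crops and labels are illustrative assumptions.

```python
# Hedged sketch of the SVM recognition stage on detected sign candidates.
import cv2
import numpy as np
from sklearn.svm import SVC

# HOG over a 64x64 window: 16x16 blocks, 8x8 stride and cells, 9 bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def sign_features(candidate_bgr):
    """Resize a detected candidate and extract a HOG descriptor."""
    patch = cv2.resize(candidate_bgr, (64, 64))
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    return hog.compute(gray).ravel()

# Dummy training crops stand in for a real labelled sign dataset.
rng = np.random.default_rng(0)
crops = rng.integers(0, 256, (50, 80, 80, 3), dtype=np.uint8)
labels = rng.integers(0, 100, 50)        # up to one hundred sign classes
X = np.array([sign_features(c) for c in crops])
clf = SVC(kernel="rbf").fit(X, labels)
```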

  4. Complete vision-based traffic sign recognition supported by an I2V communication system.

    Science.gov (United States)

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, operating on information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.

  5. Night-life of Bryde's whales

    DEFF Research Database (Denmark)

    Izadi, Sahar; Johnson, Mark; Aguilar de Soto, Natacha

    2018-01-01

logging tags on resident Bryde's whales in a busy gulf to study their daily activity patterns. We found that, while whales were active during daytime making energetic lunges to capture tonnes of plankton, they dedicated much of the night to rest. This suggests that whales may rely on vision to find prey...

  6. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision based algorithms for Unmanned Aerial Vehicles (UAV) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAV. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  7. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard. Vision 2000 is less bureaucratic than the old version. The specific requests of the Vision 2000 are: a) to identify, to monitor and to analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement actions necessary to achieve the planned results and the continual improvement of these processes; d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of the Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiological departments.

  8. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

Color measurement is an important part of overall production quality control in textile, coating, plastics, food, paper and other industries. The color measurement instruments such as colorimeters and spectrophotometers, used for production quality control, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages are available. However the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
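
    A concrete instance of the colorimetric principle at stake: camera RGB must be linearized and mapped into a device-independent space (CIE XYZ) before color differences are measured. The sketch below uses the standard sRGB/D65 matrix; treating an industrial camera as exactly sRGB is the simplifying assumption that a real colorimetric vision system would have to validate or replace with a per-camera calibration.

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ (D65) matrix.
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb8):
    """rgb8: (..., 3) uint8 sRGB pixels -> CIE XYZ tristimulus values (D65)."""
    c = rgb8.astype(np.float64) / 255.0
    # undo the sRGB transfer function before any colorimetric math
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return lin @ M_SRGB_TO_XYZ.T
```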

  9. Hi-Vision telecine system using pickup tube

    Science.gov (United States)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  10. LHCb: A New Nightly Build System for LHCb

    CERN Multimedia

    Clemencic, M

    2013-01-01

The nightly build system used so far by LHCb has been implemented as an extension of the system developed by the CERN PH/SFT group (as presented at CHEP2010). Although this version has been working for many years, it has several limitations in terms of extensibility, management and ease of use, so it was decided to develop a new version based on a continuous integration system. In this paper we describe a new implementation of the LHCb Nightly Build System based on the open source continuous integration system Jenkins and report on the experience of configuring a complex build workflow in Jenkins.

  11. Ultraviolet vision may be widespread in bats

    Science.gov (United States)

    Gorresen, P. Marcos; Cryan, Paul; Dalton, David C.; Wolf, Sandy; Bonaccorso, Frank

    2015-01-01

    Insectivorous bats are well known for their abilities to find and pursue flying insect prey at close range using echolocation, but they also rely heavily on vision. For example, at night bats use vision to orient across landscapes, avoid large obstacles, and locate roosts. Although lacking sharp visual acuity, the eyes of bats evolved to function at very low levels of illumination. Recent evidence based on genetics, immunohistochemistry, and laboratory behavioral trials indicated that many bats can see ultraviolet light (UV), at least at illumination levels similar to or brighter than those before twilight. Despite this growing evidence for potentially widespread UV vision in bats, the prevalence of UV vision among bats remains unknown and has not been studied outside of the laboratory. We used a Y-maze to test whether wild-caught bats could see reflected UV light and whether such UV vision functions at the dim lighting conditions typically experienced by night-flying bats. Seven insectivorous species of bats, representing five genera and three families, showed a statistically significant ‘escape-toward-the-light’ behavior when placed in the Y-maze. Our results provide compelling evidence of widespread dim-light UV vision in bats.

  12. Cellular phone use while driving at night.

    Science.gov (United States)

    Vivoda, Jonathon M; Eby, David W; St Louis, Renée M; Kostyniuk, Lidia P

    2008-03-01

Use of a cellular phone has been shown to negatively affect one's attention to the driving task, leading to an increase in crash risk. At any given daylight hour, about 6% of US drivers are actively talking on a hand-held cell phone. However, previous surveys have focused only on cell phone use during the day. Driving at night has been shown to be a riskier activity than driving during the day. The purpose of the current study was to assess the rate of hand-held cellular phone use while driving at night, using specialized night vision equipment. In 2006, two statewide direct observation survey waves of nighttime cellular phone use were conducted in Indiana utilizing specialized night vision equipment. Combined results of driver hand-held cellular phone use from both waves are presented in this manuscript. The rates of nighttime cell phone use were similar to results found in previous daytime studies. The overall rate of nighttime hand-held cellular phone use was 5.8 ± 0.6%. Cellular phone use was highest for females and for younger drivers. In fact, the highest rate observed during the study (of 11.9%) was for 16- to 29-year-old females. The high level of cellular phone use found within the young age group, coupled with the increased crash risk associated with cellular phone use, nighttime driving, and for young drivers in general, suggests that this issue may become an important transportation-related concern.

  13. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  14. What's crucial in night vision goggle simulation?

    Science.gov (United States)

    Kooi, Frank L.; Toet, Alexander

    2005-05-01

Training is required to correctly interpret NVG imagery. Training night operations with simulated intensified imagery has great potential. Compared to direct viewing with the naked eye, intensified imagery is relatively easy to simulate, and the cost of real NVG training is high (logistics, risk, civilian sleep deprivation, pollution). On the surface NVG imagery appears to have a structure similar to daylight imagery. However, in actuality its characteristics differ significantly from those of daylight imagery. As a result, NVG imagery frequently induces visual illusions. To achieve realistic training, simulated NVG imagery should at least reproduce the essential visual limitations of real NVG imagery caused by reduced resolution, reduced contrast, limited field-of-view, the absence of color, and the system's sensitivity to near infrared radiation. It is particularly important that simulated NVG imagery represents essential NVG visual characteristics, such as the high reflection of chlorophyll and halos. Current real-time simulation software falls short for training purposes because of an incorrect representation of shadow effects. We argue that the development of shading and shadowing merits priority to close the gap between real and simulated NVG flight conditions. Visual conspicuity can be deployed as an efficient metric to measure the 'perceptual distance' between the real NVG image and the simulated NVG image.
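
    For illustration, a minimal sketch of the limitations listed above (monochrome output, resolution loss, noise, and halos around bright sources) applied to a daylight frame. The parameter values are arbitrary assumptions, and a training-grade simulation, as the authors stress, would also need correct NIR reflectance, chlorophyll response, and shadowing.

```python
import cv2
import numpy as np

def simulate_nvg(bgr, halo_sigma=15.0, noise_std=8.0):
    """Crude NVG look: green monochrome, resolution loss, noise, halos."""
    h, w = bgr.shape[:2]
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    small = cv2.resize(gray, (w // 2, h // 2))            # resolution loss
    gray = cv2.resize(small, (w, h))
    bright = np.where(gray > 220, gray, 0.0)              # bright-source mask
    halo = cv2.GaussianBlur(bright, (0, 0), halo_sigma)   # halo / bloom
    img = np.clip(gray + halo + np.random.normal(0.0, noise_std, gray.shape), 0, 255)
    out = np.zeros((h, w, 3), np.uint8)
    out[..., 1] = img.astype(np.uint8)                    # green phosphor channel
    return out
```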

  15. Low Vision Enhancement System

    Science.gov (United States)

    1995-01-01

NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  16. Qualitative evaluations and comparisons of six night-vision colorization methods

    Science.gov (United States)

    Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul

    2013-05-01

Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object classification and reaction times, especially for low light conditions. This paper focuses on the qualitative (subjective) evaluations and comparisons of six NV colorization methods. The multispectral images include visible (Red-Green-Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. The score of each measurement is rated on a scale of 1 to 3, representing low, average, and high quality, respectively. Specifically, high contrast (a rated score of 3) means an adequate level of brightness and contrast. High detail represents high clarity of detailed contents while maintaining low artifacts. High colorfulness preserves more natural colors (i.e., closely resembles the daylight image). Overall quality is determined from the NV image compared to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) were concurrently presented to users along with the reference color (RGB) image (taken at daytime). A total of 67 subjects passed a screening test ("Ishihara Color Blindness Test") and were asked to evaluate the 9 sets of colorized images. The experimental results ranked the colorization methods from best to worst, with CBCF colorization rated best overall.
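
    Of the six methods, statistic matching (SM) is the simplest to sketch: shift the channel statistics of the false-color night image toward a daylight reference. The version below works in CIE Lab rather than the decorrelated lαβ space often used in the color-transfer literature, which is a simplifying assumption, not the paper's exact procedure.

```python
import cv2
import numpy as np

def statistic_match(night_bgr, day_ref_bgr):
    """Match per-channel mean/std of a night image to a daylight reference."""
    src = cv2.cvtColor(night_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(day_ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s, r = src[..., c], ref[..., c]
        src[..., c] = (s - s.mean()) / (s.std() + 1e-6) * r.std() + r.mean()
    out = np.clip(src, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```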

  17. The absolute threshold of colour vision in the horse.

    Directory of Open Access Journals (Sweden)

    Lina S V Roth

Full Text Available Arrhythmic mammals are active both during day and night if they are allowed. The arrhythmic horses are in possession of one of the largest terrestrial animal eyes, and the purpose of this study is to reveal whether their eye is sensitive enough to see colours at night. During the day horses are known to have dichromatic colour vision. To disclose whether they can discriminate colours in dim light, a behavioural dual choice experiment was performed. We started the training and testing at daylight intensities and the horses continued to choose correctly at a high frequency down to light intensities corresponding to moonlight. One Shetland pony mare was able to discriminate colours at 0.08 cd/m², while a half-blood gelding still discriminated colours at 0.02 cd/m². For comparison, the colour vision limit for several human subjects tested in the very same experiment was also 0.02 cd/m². Hence, the threshold of colour vision for the horse that performed best was similar to that of the humans. The behavioural results are in line with calculations of the sensitivity of cone vision, where the horse eye and human eye again are similar. The advantage of the large eye of the horse lies not in colour vision at night, but probably instead in achromatic tasks where presumably signal summation enhances sensitivity.

  18. A visual test based on a freeware software for quantifying and displaying night-vision disturbances: study in subjects after alcohol consumption.

    Science.gov (United States)

    Castro, José J; Ortiz, Carolina; Pozo, Antonio M; Anera, Rosario G; Soler, Margarita

    2014-05-07

In this work, we propose the Halo test, a simple visual test based on freeware software for quantifying and displaying night-vision disturbances perceived by subjects under different experimental conditions, here studying the influence of alcohol consumption on visual function. In the Halo test, viewed on a monitor, the subject's task consists of detecting luminous peripheral stimuli around a central high-luminance stimulus over a dark background. The test, performed by subjects before and after consuming alcoholic drinks, which deteriorate visual performance, evaluates the influence that alcohol consumption exerts on the visual-discrimination capacity under low illumination conditions. Measurements were made monocularly and binocularly. Pupil size was also measured in both conditions (pre/post). Additionally, we used a double-pass device to objectively measure the optical quality of the eye and corroborate the results from the Halo test. We found a significant deterioration of the discrimination capacity after alcohol consumption, indicating that the higher the breath-alcohol content, the greater the deterioration of the visual-discrimination capacity. After alcohol intake, the graphical results showed a greater area of undetected peripheral stimuli around the central high-luminance stimulus. An enlargement of the pupil was also observed, and the optical quality of the eye deteriorated after alcohol consumption. A greater influence of halos and other night-vision disturbances was reported with the Halo test after alcohol consumption. The Halo freeware software constitutes a positive contribution for evaluating nighttime visual performance in clinical applications, such as the one reported here, but also in patients after refractive surgery (where halos are present) or for monitoring the time course of some ocular pathologies under pharmacological treatment.

  19. Nightly Test system migration

    CERN Document Server

    Win-Lime, Kevin

    2013-01-01

The summer student program allows students to participate in the CERN adventure. They can attend several interesting lectures about particle science and participate in the work of the experiments. As a summer student, I worked for the LHCb experiment. LHCb uses a lot of software to analyze its data. All this software is organized in packages and projects. They are built and tested during the night using an automated system and the results are displayed on a web interface. LHCb is currently replacing this system and looking for a replacement candidate. I was therefore charged with unifying some internal interfaces to permit a swift migration. In this document, I briefly describe the system used by LHCb and then explain in detail what I have done.

  20. Effects of age and illumination on night driving: a road test.

    Science.gov (United States)

    Owens, D Alfred; Wood, Joanne M; Owens, Justin M

    2007-12-01

    This study investigated the effects of drivers' age and low light on speed, lane keeping, and visual recognition of typical roadway stimuli. Poor visibility, which is exacerbated by age-related changes in vision, is a leading contributor to fatal nighttime crashes. There is little evidence, however, concerning the extent to which drivers recognize and compensate for their visual limitations at night. Young, middle-aged, and elder participants drove on a closed road course in day and night conditions at a "comfortable" speed without speedometer information. During night tests, headlight intensity was varied over a range of 1.5 log units using neutral density filters. Average speed and recognition of road signs decreased significantly as functions of increased age and reduced illumination. Recognition of pedestrians at night was significantly enhanced by retroreflective markings of limb joints as compared with markings of the torso, and this benefit was greater for middle-aged and elder drivers. Lane keeping showed nonlinear effects of lighting, which interacted with task conditions and drivers' lateral bias, indicating that older drivers drove more cautiously in low light. Consistent with the hypothesis that drivers misjudge their visual abilities at night, participants of all age groups failed to compensate fully for diminished visual recognition abilities in low light, although older drivers behaved more cautiously than the younger groups. These findings highlight the importance of educating all road users about the limitations of night vision and provide new evidence that retroreflective markings of the limbs can be of great benefit to pedestrians' safety at night.

  1. Blueberry effects on dark vision and recovery after photobleaching: placebo-controlled crossover studies.

    Science.gov (United States)

    Kalt, Wilhelmina; McDonald, Jane E; Fillmore, Sherry A E; Tremblay, Francois

    2014-11-19

    Clinical evidence for anthocyanin benefits in night vision is controversial. This paper presents two human trials investigating blueberry anthocyanin effects on dark adaptation, functional night vision, and vision recovery after retinal photobleaching. One trial, S2 (n = 72), employed a 3 week intervention and a 3 week washout, two anthocyanin doses (271 and 7.11 mg cyanidin 3-glucoside equivalents (C3g eq)), and placebo. The other trial, L1 (n = 59), employed a 12 week intervention and an 8 week washout and tested one dose (346 mg C3g eq) and placebo. In both S2 and L1 neither dark adaptation nor night vision was improved by anthocyanin intake. However, in both trials anthocyanin consumption hastened the recovery of visual acuity after photobleaching. In S2 both anthocyanin doses were effective (P = 0.014), and in L1 recovery was improved at 8 weeks (P = 0.027) and 12 weeks (P = 0.030). Although photobleaching recovery was hastened by anthocyanins, it is not known whether this improvement would have an impact on everyday vision.

  2. Multi-capability color night vision HD camera for defense, surveillance, and security

    Science.gov (United States)

    Pang, Francis; Powell, Gareth; Fereyre, Pierre

    2015-05-01

e2v has developed a family of high performance cameras based on our next generation CMOS imagers that provide multiple features and capabilities to meet the range of challenging imaging applications in defense, surveillance, and security markets. Two resolution sizes are available: 1920x1080 with 5.3 μm pixels, and an ultra-low light level version at 1280x1024 with 10 μm pixels. Each type is available in either monochrome or e2v's unique Bayer-pattern color version. The camera is well suited to accommodate many of the high demands of defense, surveillance, and security applications: compact form factor (SWaP+C), color night vision performance (down to 10^-2 lux), ruggedized housing, Global Shutter, low read noise (<6e- in Global shutter mode and <2.5e- in Rolling shutter mode), 60 Hz frame rate, and high QE, especially in the enhanced NIR range (up to 1100 nm). Other capabilities include active illumination and range gating. This paper will describe all the features of the sensor and the camera. It will be followed by a presentation of the latest test data with the current developments. Then, it will conclude with a description of how these features can be easily configured to meet many different applications. With this development, we can tune rather than fully customize, making it more beneficial for many of our customers and their custom applications.

  3. A Most Rare Vision: Improvisations on "A Midsummer Night's Dream."

    Science.gov (United States)

    Hakaim, Charles J., Jr.

    1993-01-01

    Describes one teacher's methods for introducing to secondary English students the concepts of improvisation, experimentation, and innovation. Discusses numerous techniques for fostering such skills when working with William Shakespeare's "A Midsummer Night's Dream." (HB)

  4. The impact of the night float system on internal medicine residency programs.

    Science.gov (United States)

    Trontell, M C; Carson, J L; Taragin, M I; Duff, A

    1991-01-01

    To study the design, method of implementation, perceived benefits, and problems associated with a night float system. Self-administered questionnaire completed by program directors, which included both structured and open-ended questions. The answers reflect resident and student opinions as well as those of the program directors, since program directors regularly obtain feedback from these groups. The 442 accredited internal medicine residency programs listed in the 1988-89 Directory of Graduate Medical Education Programs. Of the 442 programs, 79% responded, and 30% had experience with a night float system. The most frequent methods for initiating a night float system included: decreasing elective time (42.3%), hiring more residents (26.9%), creating a non-teaching service (12.5%), and reallocating housestaff time (9.6%). Positive effects cited include decreased fatigue, improved housestaff morale, improved recruiting, and better attitude toward internal medicine training. The quality of medical care was considered the same or better by most programs using it. The most commonly cited problems were decreased continuity of care, inadequate teaching of the night float team, and miscommunication. Residency programs using a night float system usually observe a positive effect on housestaff morale, recruitment, and working hours and no detrimental effect on the quality of patient care. Miscommunication and inadequate learning experience for the night float team are important potential problems. This survey suggests that the night float represents one solution to reducing resident working hours.

  5. Pleiades Visions

    Science.gov (United States)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  6. Effects of a night-team system on resident sleep and work hours.

    Science.gov (United States)

    Chua, Kao-Ping; Gordon, Mary Beth; Sectish, Theodore; Landrigan, Christopher P

    2011-12-01

In 2009, Children's Hospital Boston implemented a night-team system on general pediatric wards to reduce extended work shifts. Residents worked 5 consecutive nights for 1 week and worked day shifts for the remainder of the rotation. Of note, resident staffing at night decreased under this system. The objective of this study was to assess the effects of this system on resident sleep and work hours. We conducted a prospective cohort study in which residents on the night-team system logged their sleep and work hours on work days. These data were compared with similar data collected in 2004, when there was a traditional call system. In 2004 and 2009, mean shift length was 15.22 ± 6.86 and 12.92 ± 5.70 hours, respectively (P = .161). Daily work hours were 10.49 ± 6.85 and 8.79 ± 6.42 hours, respectively (P = .08). Nightly sleep time decreased from 6.72 ± 2.60 to 4.77 ± 2.46 hours. The night-team system was unexpectedly associated with decreased sleep hours. As residency programs create work schedules that are compliant with the 2011 Accreditation Council for Graduate Medical Education duty-hour standards, resident sleep should be monitored carefully.

  7. Vision Systems with the Human in the Loop

    Science.gov (United States)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically-based usability experiments is stressed.

  8. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  9. Vision Systems with the Human in the Loop

    Directory of Open Access Journals (Sweden)

    Bauckhage Christian

    2005-01-01

Full Text Available The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically-based usability experiments is stressed.

  10. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye-doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  11. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  12. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  13. Reconfigurable vision system for real-time applications

    Science.gov (United States)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted for computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  14. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

Human performance can degrade due to boredom, distraction and fatigue in vision-related tasks such as measurement, counting etc. Vision-based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in the fields of sensors and related technologies, and advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time than humans. Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement, counting etc. This paper describes in brief four such applications. (author)

  15. Vision system for dial gage torque wrench calibration

    Science.gov (United States)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurements. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications like vision systems to read and calibrate analog instruments.
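
    A hedged sketch of the core measurement (not the system described in the record): estimate the pointer angle in two frames from the longest Hough line segment and report the wrapped angular difference, which is proportional to the applied torque. The dial center, thresholds, and line-based approach are all assumptions.

```python
import cv2
import numpy as np

def pointer_angle(gray, center):
    """Angle (degrees) of the longest line segment, oriented away from the dial center."""
    edges = cv2.Canny(gray, 60, 120)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        raise ValueError("no pointer candidate found")
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    if np.hypot(x1 - center[0], y1 - center[1]) > np.hypot(x2 - center[0], y2 - center[1]):
        x1, y1, x2, y2 = x2, y2, x1, y1   # make (x1, y1) the end nearest the center
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

def angular_change(img_before, img_after, center):
    """Pointer rotation between two frames, wrapped to (-180, 180] degrees."""
    d = pointer_angle(img_after, center) - pointer_angle(img_before, center)
    return (d + 180.0) % 360.0 - 180.0
```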

16. Subjective evaluation of uncorrected vision in patients undergoing cataract surgery with (diffractive) multifocal lenses and monovision

    Directory of Open Access Journals (Sweden)

    Stock RA

    2017-07-01

Full Text Available Ricardo Alexandre Stock, Thaís Thumé, Luan Gabriel Paese, Elcio Luiz Bonamigo Universidade do Oeste de Santa Catarina, Rua Getúlio Vargas, Joaçaba, Santa Catarina, Brazil Purpose: To analyze patient satisfaction and difficulties with bilateral multifocal intraocular lens (IOL) implantation and aspheric monofocal IOL implantation using monovision, after cataract surgery. Materials and methods: A total of 61 participants were included in the study, 29 with monovision and 32 with multifocal lenses. The inclusion criteria were patients undergoing phacoemulsification for bilateral visual impairment due to cataracts and presenting with postoperative visual acuity of 20/30 or better for distance and line J3 or better for near vision. Results: The 2 groups had similar results regarding difficulties with daily activities such as distance vision, near vision, watching television, reading, cooking, using a computer or cellphone, shaving/putting on makeup and shopping. There were differences in responses between the groups regarding difficulty with night vision (P=0.0565) and night driving (P=0.0291). The degree of satisfaction in terms of distance vision without glasses was statistically significantly better in the monovision group (P=0.0332), but not for near vision (P=0.9101). Conclusion: Both techniques yielded satisfactory results regarding visual acuity for different activities without the need for glasses. Multifocal lenses are a good option for patients who desire independence from glasses, with the exception of night driving. Keywords: cataract extraction, aphakia, postcataract, patient satisfaction, night vision

  17. Autonomous navigation of the vehicle with vision system. Vision system wo motsu sharyo no jiritsu soko seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Yatabe, T.; Hirose, T.; Tsugawa, S. (Mechanical Engineering Laboratory, Tsukuba (Japan))

    1991-11-10

As part of research on automatic driving systems, a pilot driverless automobile was built and evaluated; it is equipped with obstacle detection and automatic navigation functions that do not depend on ground facilities such as guide cables. A small car was mounted with a vision system to recognize obstacles three-dimensionally by means of two TV cameras, and a dead reckoning system to calculate the car's position and direction from the speeds of the rear wheels in real time. The control algorithm, which recognizes obstacles and the road area from the vision data and drives the car automatically, uses a table-look-up method that retrieves the necessary driving commands from a stored table based on data from the vision system. The steering uses a target-point-following algorithm, provided that the car has a map. Driving tests yielded useful knowledge: the system meets the basic functional requirements but needs a few improvements because the control is open loop. 36 refs., 22 figs., 2 tabs.
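
    The dead reckoning component lends itself to a compact illustration. Assuming a differential-drive model with a known track width (both assumptions, since the record does not give the vehicle geometry), the rear-wheel speeds integrate into position and heading as follows.

```python
import math

def dead_reckon(pose, v_left, v_right, dt, track=1.2):
    """pose = (x, y, heading_rad); wheel speeds in m/s; returns the updated pose."""
    x, y, th = pose
    v = 0.5 * (v_left + v_right)      # forward speed of the axle midpoint
    w = (v_right - v_left) / track    # yaw rate from the wheel-speed difference
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += w * dt
    return (x, y, th)
```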

  18. Health system vision of iran in 2025.

    Science.gov (United States)

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory text was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement prosperity and justice are considered major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.

  19. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

A vision based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., in which they dub their algorithm MonoSLAM [1--4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
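
    The landmark-tracking step borrowed from OpenCV can be sketched directly against the real OpenCV API (the thesis itself is in C++; Python is used here for brevity). The feature counts and LK window parameters below are illustrative assumptions.

```python
import cv2

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_landmarks(prev_gray, gray, prev_pts):
    """Propagate feature points with pyramidal LK; drop lost correspondences."""
    if prev_pts is None or len(prev_pts) < 20:   # top up the feature set
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=10)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts,
                                                 None, **lk_params)
    good = status.ravel() == 1
    return prev_pts[good], nxt[good]   # matched (previous, current) point pairs
```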

  20. LWIR passive perception system for stealthy unmanned ground vehicle night operations

    Science.gov (United States)

    Lee, Daren; Rankin, Arturo; Huertas, Andres; Nash, Jeremy; Ahuja, Gaurav; Matthies, Larry

    2016-05-01

    Resupplying forward-deployed units in rugged terrain in the presence of hostile forces creates a high threat to manned air and ground vehicles. An autonomous unmanned ground vehicle (UGV) capable of navigating stealthily at night in off-road and on-road terrain could significantly increase the safety and success rate of such resupply missions for warfighters. Passive night-time perception of terrain and obstacle features is a vital requirement for such missions. As part of the ONR 30 Autonomy Team, the Jet Propulsion Laboratory developed a passive, low-cost night-time perception system under the ONR Expeditionary Maneuver Warfare and Combating Terrorism Applied Research program. Using a stereo pair of forward looking LWIR uncooled microbolometer cameras, the perception system generates disparity maps using a local window-based stereo correlator to achieve real-time performance while maintaining low power consumption. To overcome the lower signal-to-noise ratio and spatial resolution of LWIR thermal imaging technologies, a series of pre-filters were applied to the input images to increase the image contrast and stereo correlator enhancements were applied to increase the disparity density. To overcome false positives generated by mixed pixels, noisy disparities from repeated textures, and uncertainty in far range measurements, a series of consistency, multi-resolution, and temporal based post-filters were employed to improve the fidelity of the output range measurements. The stereo processing leverages multi-core processors and runs under the Robot Operating System (ROS). The night-time passive perception system was tested and evaluated on fully autonomous testbed ground vehicles at SPAWAR Systems Center Pacific (SSC Pacific) and Marine Corps Base Camp Pendleton, California. This paper describes the challenges, techniques, and experimental results of developing a passive, low-cost perception system for night-time autonomous navigation.
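
    A hedged approximation of the pipeline shape described above, using OpenCV's block matcher as the local window-based correlator: a CLAHE contrast pre-filter on each LWIR frame, disparity computation, and a left-right consistency check standing in for one of the post-filters. All parameter values are assumptions, not those of the JPL system.

```python
import cv2
import numpy as np

clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def lwir_disparity(left8, right8, lr_tol=1.0):
    """8-bit LWIR pair -> float disparity map with invalid pixels set to NaN."""
    l = clahe.apply(left8)                            # contrast pre-filter
    r = clahe.apply(right8)
    d_l = bm.compute(l, r).astype(np.float32) / 16.0  # fixed-point -> pixels
    # right-image disparity via the flipped-pair trick
    d_r = np.fliplr(bm.compute(np.ascontiguousarray(np.fliplr(r)),
                               np.ascontiguousarray(np.fliplr(l)))
                    ).astype(np.float32) / 16.0
    h, w = d_l.shape
    rows = np.arange(h)[:, None]
    cols = np.clip(np.round(np.arange(w)[None, :] - d_l).astype(int), 0, w - 1)
    # left-right consistency post-filter: invalidate inconsistent pixels
    bad = np.abs(d_l - d_r[rows, cols]) > lr_tol
    d_l[bad | (d_l < 0)] = np.nan
    return d_l
```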

  1. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands. The camera operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  2. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

  3. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system

  4. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
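
    The ranging principle behind an FMCW coherent laser radar reduces to one line: for a linear chirp of bandwidth B over duration T, target range is proportional to the beat frequency between the outgoing and returned signals, R = c * f_beat * T / (2B). A back-of-the-envelope check, with made-up chirp parameters (the record does not give the CLVS values):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_bandwidth_hz, chirp_duration_s):
    """R = c * f_beat * T / (2 * B) for a linear frequency sweep."""
    return C * f_beat_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

# Example: a 100 GHz sweep over 1 ms with a 10 MHz beat -> about 15 m range.
print(fmcw_range(10e6, 100e9, 1e-3))  # ~14.99
```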

  5. Vision in the nocturnal wandering spider Leucorchestris arenicola (Araneae: Sparassidae)

    DEFF Research Database (Denmark)

    Nørgaard, Thomas; Nilsson, Dan-Eric; Henschel, Joh R

    2008-01-01

    At night the Namib Desert spider Leucorchestris arenicola performs long-distance homing across its sand dune habitat. By disabling all or pairs of the spiders' eight eyes we found that homing ability was severely reduced when vision was fully abolished. Vision, therefore, seems to play a key role...... in the posterior and anteriomedian eyes, and at approximately 540 nm in the anteriolateral eyes. Theoretical calculations of photon catches showed that the eyes are likely to employ a combination of spatial and temporal pooling in order to function at night. Under starlit conditions, the raw spatial and temporal...... resolution of the eyes is insufficient for detecting any visual information on structures in the landscape, and bright stars would be the only objects visible to the spiders. However, by summation in space and time, the spiders can rescue enough vision to detect coarse landscape structures. We show that L...

  6. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  7. Limits of colour vision in dim light.

    Science.gov (United States)

    Kelber, Almut; Lind, Olle

    2010-09-01

    Humans and most vertebrates have duplex retinae with multiple cone types for colour vision in bright light, and a single rod type for achromatic vision in dim light. Instead of comparing signals from multiple spectral types of photoreceptors, such species use one highly sensitive receptor type, thus improving the signal-to-noise ratio at night. However, the nocturnal hawkmoth Deilephila elpenor, the nocturnal bee Xylocopa tranquebarica and the nocturnal gecko Tarentola chazaliae can discriminate colours at extremely dim light intensities. To be able to do so, they sacrifice spatial and temporal resolution in favour of colour vision. We review what is known about colour vision in dim light, and compare colour vision thresholds with the optical sensitivity of the photoreceptors in selected animal species with lens and compound eyes.

  8. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  9. Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2016-01-01

    Full Text Available Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms have low performance in some indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the non-vehicle pixels are removed with a visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the previous step. The proposed algorithm was tested on around 6000 images and achieves a detection rate of 92.3% at a processing rate of 25 Hz, which is better than existing methods.
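
    A minimal sketch of the three-stage pipeline described above, using a simple brightness threshold as a stand-in for the saliency computation and a stubbed classifier in place of the deep belief network:

    ```python
    import cv2

    # Sketch of the saliency -> candidate -> verify pipeline. The simple
    # Otsu threshold stands in for the paper's saliency step (hot vehicles
    # appear bright in far-infrared), and the classifier is a placeholder
    # for the trained deep-belief-network verifier.

    def detect_vehicles(fir_image, min_area=200, max_area=20000):
        # Stage 1: suppress most non-vehicle pixels.
        _, salient = cv2.threshold(fir_image, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Stage 2: generate candidates, filtered by a vehicle-size prior.
        contours, _ = cv2.findContours(salient, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        candidates = [cv2.boundingRect(c) for c in contours
                      if min_area <= cv2.contourArea(c) <= max_area]
        # Stage 3: verify each candidate with the trained classifier.
        return [box for box in candidates if classify(fir_image, box)]

    def classify(image, box):
        return True  # placeholder for the deep-belief-network verifier
    ```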

  10. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support the introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision, from 100 feet above the touchdown zone elevation to touchdown and rollout, in visibilities as low as 1000 feet RVR appears to be viable, as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggest further study for head-down implementations.

  11. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images; moreover, it provides clear images in invisible fog and smoke environments by summing time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been demonstrated in target recognition and in harsh environments, such as fog and underwater vision. This technology has also been demonstrated for 3D imaging based on range-gated imaging. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog

  12. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    International Nuclear Information System (INIS)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin

    2014-01-01

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images; moreover, it provides clear images in invisible fog and smoke environments by summing time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been demonstrated in target recognition and in harsh environments, such as fog and underwater vision. This technology has also been demonstrated for 3D imaging based on range-gated imaging. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog

  13. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems....... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability...... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  14. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available Visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. The feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.

  15. Distance Estimation to Flashes in a Simulated Night Vision Environment

    Science.gov (United States)

    2007-12-01

  16. Night market contact lens-related corneal ulcer: Should we increase public awareness?

    Directory of Open Access Journals (Sweden)

    Umi Kalthum Md Noh

    2015-07-01

    Full Text Available A 21-year-old Malay woman presented with a 4-day history of progressive painful blurring of vision in the left eye due to cosmetic contact lens (CL) wear. She had always bought her CLs from a night market and disposed of them every 3–4 months. She had a very poor CL hygiene regime and continuously wore the lenses for more than 8 hours daily. Prior to presentation, she had been using a combination steroid and antibiotic eye drop prescribed by a general practitioner whom she had consulted earlier for similar complaints of eye redness and pain associated with reduced vision. Her condition and vision deteriorated after 2 days of medication instillation.

  17. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real-time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimensions of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
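
    The module-chaining idea can be sketched as follows; the specific modules, their order, and the 10x6 electrode grid are illustrative assumptions, not the device's actual parameters:

    ```python
    import cv2

    # Sketch of the AVS(2) idea: run a user-ordered chain of enhancement
    # modules, then pixelate the frame down to the electrode-array size.
    # The module choices and the 10x6 grid are illustrative only.

    def edge_enhance(frame):
        blurred = cv2.GaussianBlur(frame, (5, 5), 0)
        return cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)  # unsharp mask

    def contrast_stretch(frame):
        return cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)

    def pixelate(frame, grid=(10, 6)):
        # Average down to one value per electrode.
        return cv2.resize(frame, grid, interpolation=cv2.INTER_AREA)

    def process(frame, modules):
        for module in modules:  # modules may repeat, in any user order
            frame = module(frame)
        return pixelate(frame)

    # e.g. process(gray_frame, [contrast_stretch, edge_enhance, edge_enhance])
    ```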

  18. Visual summation in night-flying sweat bees: a theoretical study.

    Science.gov (United States)

    Theobald, Jamie Carroll; Greiner, Birgit; Wcislo, William T; Warrant, Eric J

    2006-07-01

    Bees are predominantly diurnal; only a few groups fly at night. An evolutionary limitation that bees must overcome to inhabit dim environments is their eye type: bees possess apposition compound eyes, which are poorly suited to vision in dim light. Here, we theoretically examine how nocturnal bees Megalopta genalis fly at light levels usually reserved for insects bearing more sensitive superposition eyes. We find that neural summation should greatly increase M. genalis's visual reliability. Predicted spatial summation closely matches the morphology of laminal neurons believed to mediate such summation. Improved reliability costs acuity, but dark adapted bees already suffer optical blurring, and summation further degrades vision only slightly.

  19. Müller cells separate between wavelengths to improve day vision with minimal effect upon night vision.

    Science.gov (United States)

    Labin, Amichai M; Safuri, Shadi K; Ribak, Erez N; Perlman, Ido

    2014-07-08

    Vision starts with the absorption of light by the retinal photoreceptors-cones and rods. However, due to the 'inverted' structure of the retina, the incident light must propagate through reflecting and scattering cellular layers before reaching the photoreceptors. It has been recently suggested that Müller cells function as optical fibres in the retina, transferring light illuminating the retinal surface onto the cone photoreceptors. Here we show that Müller cells are wavelength-dependent wave-guides, concentrating the green-red part of the visible spectrum onto cones and allowing the blue-purple part to leak onto nearby rods. This phenomenon is observed in the isolated retina and explained by a computational model, for the guinea pig and the human parafoveal retina. Therefore, light propagation by Müller cells through the retina can be considered as an integral part of the first step in the visual process, increasing photon absorption by cones while minimally affecting rod-mediated vision.

  20. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system in the pilots' preflight preparation, the aircrew can get more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission, and accordingly flight safety is improved. This system is also useful in validating visual flight procedure designs, and it aids flight procedure design.

  1. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  2. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  3. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
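
    For orientation, the passive-stereo half of such a fusion scheme reduces to searching along the epipolar line for the best-matching patch. The following simplified block matcher illustrates that component only; the paper's dynamic-programming fusion with the laser patterns is not reproduced here.

    ```python
    import numpy as np

    # Highly simplified illustration of passive stereo: per-pixel disparity
    # by SSD block matching along the epipolar (same-row) line of a
    # rectified image pair. Window and search-range values are illustrative.

    def disparity_map(left, right, max_disp=32, window=5):
        h, w = left.shape
        half = window // 2
        left = left.astype(np.float32)
        right = right.astype(np.float32)
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                costs = [np.sum((patch - right[y - half:y + half + 1,
                                               x - d - half:x - d + half + 1]) ** 2)
                         for d in range(max_disp)]
                disp[y, x] = float(np.argmin(costs))  # best-matching shift
        return disp
    ```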

  4. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, cannot accurately simulate insect vision characteristics, and/or is too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance.

  5. The autonomous vision system on TeamSat

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Riis, Troels

    1999-01-01

    The second qualification flight of Ariane 5 blasted off the European Space Port in French Guiana on October 30, 1997, carrying on board a small technology demonstration satellite called TeamSat. Several experiments were proposed by various universities and research institutions in Europe and five...... of them were finally selected and integrated into TeamSat, namely FIPEX, VTS, YES, ODD and the Autonomous Vision System, AVS, a fully autonomous star tracker and vision system. This paper gives a short overview of the TeamSat satellite; design, implementation and mission objectives. AVS is described in more...

  6. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies and the PCB inspection system is in the process of being deployed in a manufacturing plant.
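
    The LIF/OLI decomposition maps naturally onto a two-class skeleton like the one below; the feature extractor and matcher are placeholders for illustration, not the deployed system's code.

    ```python
    # Skeleton of the two-stage SMV decomposition described above.

    class LearnInspectionFeatures:
        """Offline stage: learn visual inspection features from design
        data (e.g., a CAD file) or from sample products."""

        def learn(self, samples):
            self.features = [self.extract(s) for s in samples]
            return self.features

        def extract(self, sample):
            return sample  # placeholder feature extractor

    class OnLineInspection:
        """Online stage: inspect products against the learnt features."""

        def __init__(self, features):
            self.features = features

        def inspect(self, product):
            return all(self.matches(f, product) for f in self.features)

        def matches(self, feature, product):
            return True  # placeholder matcher
    ```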

  7. Development of a Configurable Growth Chamber with a Computer Vision System to Study Circadian Rhythm in Plants

    Directory of Open Access Journals (Sweden)

    Marcos Egea-Cortines

    2012-11-01

    Full Text Available Plant development is the result of an endogenous morphogenetic program that integrates environmental signals. The so-called circadian clock is a set of genes that integrates environmental inputs into an internal pacing system that gates growth and other outputs. Study of circadian growth responses requires high sampling rates to detect changes in growth and avoid aliasing. We have developed a flexible, configurable growth chamber comprising a computer vision system that allows sampling rates ranging from one image every 30 s to one image every several hours or days. The vision system has a controlled illumination system, which allows the user to set up different configurations. The illumination system used emits a combination of wavelengths ensuring the optimal growth of the species under analysis. In order to obtain high contrast in the captured images, the capture system is composed of two CCD cameras, one for the day period and one for the night period. Depending on the sample type, flexible image processing software calculates different parameters based on geometric calculations. As a proof of concept we tested the system on three different plant tissues: growth of petunia and snapdragon (Antirrhinum majus) flowers, and of cladodes from the cactus Opuntia ficus-indica. We found that petunia flowers grow at a steady pace and display a strong growth increase in the early morning, whereas Opuntia cladode growth turned out not to follow a circadian growth pattern under the growth conditions imposed. Furthermore, we were able to identify a decoupling of the increases in area and length, indicating that two independent growth processes are responsible for the final size and shape of the cladode.
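
    The acquisition loop of such a chamber can be sketched in a few lines; the camera indices, photoperiod rule, and file naming here are assumptions for illustration, not the chamber's actual configuration.

    ```python
    import time
    import cv2

    # Sketch of the chamber's acquisition loop: a configurable sampling
    # period (30 s to hours) and separate day/night CCD cameras. Device
    # indices and the is_day_period() rule are illustrative assumptions.

    def is_day_period(t):
        return 6 <= time.localtime(t).tm_hour < 22  # assumed photoperiod

    def acquire(period_s=30, n_frames=10):
        day_cam, night_cam = cv2.VideoCapture(0), cv2.VideoCapture(1)
        for i in range(n_frames):
            cam = day_cam if is_day_period(time.time()) else night_cam
            ok, frame = cam.read()
            if ok:
                cv2.imwrite(f"frame_{i:05d}.png", frame)
            time.sleep(period_s)  # sampling period sets the rate
        day_cam.release()
        night_cam.release()
    ```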

  8. Neuromorphic vision sensors and preprocessors in system applications

    Science.gov (United States)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  9. Localization of a novel X-linked congenital stationary night blindness locus: close linkage to the RP3 type retinitis pigmentosa gene region

    NARCIS (Netherlands)

    Bergen, A. A.; ten Brink, J. B.; Riemslag, F.; Schuurman, E. J.; Tijmes, N.

    1995-01-01

    X-linked congenital stationary night blindness (CSNBX) is a non-progressive retinal disorder characterized by decreased visual acuity and loss of night vision. CSNBX is clinically heterogeneous with respect to the involvement of retinal rods and/or cones in the disease. In this study, we localize a

  10. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  11. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control, which operates in the 3D workspace, uses real-time image processing to perform tasks of feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden at the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.

  12. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
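
    A minimal sketch of the DIP chain named above (filtering, enhancement, edge detection, and region extraction), with illustrative thresholds rather than the paper's tuned values:

    ```python
    import cv2

    # Sketch of the named DIP chain; the paper's actual recognition rules
    # for the three obstacle types are not reproduced, and the thresholds
    # below are illustrative only.

    def find_obstacles(bgr_image):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)       # filtering
        gray = cv2.equalizeHist(gray)        # enhancement
        edges = cv2.Canny(gray, 50, 150)     # edge detection
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # segmentation into candidate obstacle regions by area
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) > 100]
    ```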

  13. Visual Problems in Night Operations (Problemes de Vision dans les Operations de Nuit)

    Science.gov (United States)

    1992-05-01

    Conference proceedings (FR). Recent progress concerning pilot aids in the cockpit of military aircraft in operations, the increasing sophistication of the tasks demanded of aircraft pilots, and the man-machine relationship in these systems. Among the items reported, an Israeli firm is testing a new drone called 'Impact', equipped with a night-vision television camera and a ground control station.

  14. Use of Circadian Lighting System to improve night shift alertness and performance of NRC Headquarters Operations Officers

    International Nuclear Information System (INIS)

    Baker, T.L.; Morisseau, D.; Murphy, N.M.

    1995-01-01

    The Nuclear Regulatory Commission's (NRC) Headquarters Operations Officers (HOOs) receive and respond to events reported in the nuclear industry on a 24-hour basis. The HOOs have reported reduced alertness on the night shift, leading to a potential deterioration in their on-shift cognitive performance during the early morning hours. For some HOOs, maladaptation to the night shift was also reported to be the principal cause of: (a) reduced alertness during the commute to and from work, (b) poor sleep quality, and (c) personal lifestyle problems. ShiftWork Systems, Inc. (SWS) designed and installed a Circadian Lighting System (CLS) at both the Bethesda and Rockville HOO stations with the goal of facilitating the HOOs physiological adjustment to their night shift schedules. The data indicate the following findings: less subjective fatigue on night shifts; improved night shift alertness and mental performance; higher HOO confidence in their ability to assess event reports; longer, deeper and more restorative day sleep after night duty shifts; swifter adaptation to night work; and a safer commute, particularly for those with extensive drives

  15. Use of Circadian Lighting System to improve night shift alertness and performance of NRC Headquarters Operations Officers

    Energy Technology Data Exchange (ETDEWEB)

    Baker, T.L.; Morisseau, D.; Murphy, N.M. [ShiftWork Systems, Cambridge, MA (United States)] [and others

    1995-04-01

    The Nuclear Regulatory Commission's (NRC) Headquarters Operations Officers (HOOs) receive and respond to events reported in the nuclear industry on a 24-hour basis. The HOOs have reported reduced alertness on the night shift, leading to a potential deterioration in their on-shift cognitive performance during the early morning hours. For some HOOs, maladaptation to the night shift was also reported to be the principal cause of: (a) reduced alertness during the commute to and from work, (b) poor sleep quality, and (c) personal lifestyle problems. ShiftWork Systems, Inc. (SWS) designed and installed a Circadian Lighting System (CLS) at both the Bethesda and Rockville HOO stations with the goal of facilitating the HOOs physiological adjustment to their night shift schedules. The data indicate the following findings: less subjective fatigue on night shifts; improved night shift alertness and mental performance; higher HOO confidence in their ability to assess event reports; longer, deeper and more restorative day sleep after night duty shifts; swifter adaptation to night work; and a safer commute, particularly for those with extensive drives.

  16. Green Grape Detection and Picking-Point Calculation in a Night-Time Natural Environment Using a Charge-Coupled Device (CCD Vision Sensor with Artificial Illumination

    Directory of Open Access Journals (Sweden)

    Juntao Xiong

    2018-03-01

    Full Text Available Night-time fruit-picking technology is important to picking robots. This paper proposes a method of night-time detection and picking-point positioning for green grape-picking robots to solve the difficult problem of green grape detection and picking in night-time conditions with artificial lighting systems. Taking a representative green grape named Centennial Seedless as the research object, daytime and night-time grape images were captured by a custom-designed visual system. Detection was conducted employing the following steps: (1) The RGB (red, green and blue) color model was determined for night-time green grape detection through analysis of the color features of grape images under daytime natural light and night-time artificial lighting. The R component of the RGB color model was rotated and the image resolution was compressed; (2) The improved Chan–Vese (C–V) level set model and morphological processing method were used to remove the background of the image, leaving only the grape fruit; (3) Based on the characteristic of grapes hanging vertically, combining the principle of the minimum circumscribed rectangle of the fruit and the Hough straight-line detection method, straight-line fitting for the fruit stem was conducted, and the picking point was calculated using the stem when the angle between the fitted line and the vertical was less than 15°. The visual detection experiment results showed that the accuracy of grape fruit detection was 91.67% and the average running time of the proposed algorithm was 0.46 s. The picking-point calculation experiment results showed that the highest accuracy for the picking-point calculation was 92.5%, while the lowest was 80%. The results demonstrate that the proposed method of night-time green grape detection and picking-point calculation can provide technical support to grape-picking robots.
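
    Step (3) can be sketched as a Hough line fit followed by the 15° verticality test; the preprocessing of steps (1) and (2) is assumed to have already produced the stem edge image, and the parameter values are illustrative.

    ```python
    import numpy as np
    import cv2

    # Sketch of the picking-point step: fit a line to stem pixels with the
    # probabilistic Hough transform and accept a picking point only if the
    # stem is within 15 degrees of vertical. Thresholds are illustrative.

    def picking_point(stem_edges):
        lines = cv2.HoughLinesP(stem_edges, 1, np.pi / 180, threshold=30,
                                minLineLength=20, maxLineGap=5)
        if lines is None:
            return None
        x1, y1, x2, y2 = lines[0][0]  # strongest detected line segment
        angle_from_vertical = abs(
            90.0 - abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))))
        if angle_from_vertical < 15.0:
            return ((x1 + x2) // 2, (y1 + y2) // 2)  # stem midpoint
        return None
    ```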

  17. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  18. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Full Text Available Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions in these outdoor environments with high variability in illumination, irregular terrain conditions or different plant growth states, among others. In this regard, three main topics have been conveniently addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic with illustrative examples focused on specific applications in agriculture, although they could be applied in contexts other than agricultural. A case study is provided as a result of research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project for effective weed control in maize fields (wide-row crops), funded by the European Union, where the machine vision system onboard the autonomous vehicles was the most important part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided together with a review of methods and approaches on these topics.

  19. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  20. Day-night contrast as source of health for the human circadian system.

    Science.gov (United States)

    Martinez-Nicolas, Antonio; Madrid, Juan Antonio; Rol, Maria Angeles

    2014-04-01

    Modern societies are characterized by a 24/7 lifestyle (LS) with no environmental differences between day and night, resulting in weak zeitgebers (weak daylight, absence of darkness during the night, constant environmental temperature, a sedentary LS and frequent snacking) and, as a consequence, in an impaired circadian system (CS) through a process known as chronodisruption. Both weak zeitgebers and CS impairment are related to human pathologies (certain cancers, metabolic syndrome and affective and cognitive disorders), but little is known about how to chronoenhance the CS. The aim of this work is to propose practical strategies for chronoenhancement, based on accentuating the day/night contrast. For this, 131 young subjects were recruited, and their wrist temperature (WT), activity, body position, light exposure, environmental temperature and sleep were recorded under free-living conditions for 1 week. Subjects with high contrast (HC) and low contrast (LC) for each variable were selected to analyze the effect that HC in activity, body position, environmental temperature, light exposure and sleep would have on WT. We found that HC subjects showed better rhythms than LC subjects for every variable except sleep. Subjects with HC and LC for WT also demonstrated differences in LS, where HC subjects had a slightly advanced night-phase onset and a general increase in day/night contrast. In addition, theoretical high day/night contrast calculated using mathematical models suggests an improvement by means of LS contrast. Finally, some individuals are classified as belonging to the HC group in terms of WT even when they are exposed to the LS characteristic of the LC group, while others exhibit WT arrhythmicity despite their good LS habits, revealing two different WT components: an exogenous component modified by LS and an endogenous component that is refractory to it. Therefore, intensifying the day/night contrast in subjects' LS has proven to be a feasible measure to chronoenhance the CS.

  1. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process... Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  2. Organization and management of ATLAS nightly builds

    International Nuclear Information System (INIS)

    Luehring, F; Obreshkov, E; Quarrie, D; Rybkine, G; Undrus, A

    2010-01-01

    The automated multi-platform software nightly build system is a major component in the ATLAS collaborative software organization, validation and code approval schemes. Code developers from ATLAS participating institutes spread all around the world use about 30 branches of nightly releases for testing new packages, verification of patches to existing software, and migration to new platforms and compilers. The nightly releases lead up to, and are the basis of, stable software releases used for data processing worldwide. The ATLAS nightly builds are managed by the fully automated NICOS framework on a computing farm with 44 powerful multiprocessor nodes. The ATN test tool is embedded within the nightly system and provides results shortly after full compilations complete. Other test frameworks are synchronized with NICOS jobs and run larger-scale validation jobs using the nightly releases. NICOS web pages dynamically provide information about the progress and results of the builds. For faster feedback, e-mail notifications about nightly release problems are automatically distributed to the responsible developers.
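
    A generic nightly-build cycle (checkout, build, test, notify on failure) can be sketched as below. This is an illustration of the workflow only, not the NICOS framework itself; the repository URL, build commands, and addresses are placeholders.

    ```python
    import smtplib
    import subprocess
    from email.message import EmailMessage

    def run(cmd):
        return subprocess.run(cmd, shell=True, capture_output=True, text=True)

    def nightly(branch):
        # Placeholder checkout/build/test commands for one nightly branch.
        steps = [f"git clone -b {branch} https://example.org/repo.git build",
                 "cmake -S build -B build/out && cmake --build build/out",
                 "ctest --test-dir build/out"]
        for step in steps:
            result = run(step)
            if result.returncode != 0:
                notify(branch, step, result.stderr)
                return False
        return True

    def notify(branch, step, log):
        # E-mail the responsible developers about the failure.
        msg = EmailMessage()
        msg["Subject"] = f"[nightly] {branch} failed at: {step}"
        msg["From"], msg["To"] = "builds@example.org", "devs@example.org"
        msg.set_content(log[-2000:])          # tail of the error log
        with smtplib.SMTP("localhost") as s:  # assumes a local mail relay
            s.send_message(msg)
    ```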

  3. Intensity measurement of automotive headlamps using a photometric vision system

    Science.gov (United States)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive head lamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  4. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non task-specific grasps of unknown ...... and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents....... presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions...... of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour based grasping methods, the definition and evaluation of surface based grasping methods, the definition of a benchmark for testing...

  5. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    Science.gov (United States)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  6. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    Science.gov (United States)

    2015-03-26

  7. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appear feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  8. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  9. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    Science.gov (United States)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First there is the innate variability of the wood material itself. No two species look exactly the same; in fact, there can be significant differences in visual appearance among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Secondly, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple mill work to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper will describe the vision system that has been developed. It will assess the current system capabilities, and it will discuss directions for future research. It will be argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  10. Development of Vision System for Dimensional Measurement for Irradiated Fuel Assembly

    International Nuclear Information System (INIS)

    Shin, Jungcheol; Kwon, Yongbock; Park, Jongyoul; Woo, Sangkyun; Kim, Yonghwan; Jang, Youngki; Choi, Joonhyung; Lee, Kyuseog

    2006-01-01

    In order to develop an advanced nuclear fuel, a series of pool side examinations (PSE) is performed to confirm the in-pile behavior of the fuel for commercial production. For this purpose, a vision system was developed to measure the mechanical integrity, such as assembly bowing, twist and growth, of the loaded lead test assembly. Using this vision system, three (3) PSE campaigns were carried out at Uljin Unit 3 and Kori Unit 2 for the advanced fuels PLUS7™ and 16ACE7™ developed by KNFC. Among the main characteristics of the vision system are its very simple structure and measuring principle. This feature greatly reduces the equipment installation and inspection time, and allows the PSE to be finished without disturbing the fuel loading and unloading activities during utility overhaul periods. Another feature is the high accuracy and repeatability achieved by this vision system

  11. Night myopia studied with an adaptive optics visual analyzer.

    Directory of Open Access Journals (Sweden)

    Pablo Artal

    Full Text Available PURPOSE: Eyes with distant objects in focus in daylight are thought to become myopic in dim light. This phenomenon, often called "night myopia", has been studied extensively for several decades. However, despite its general acceptance, its magnitude and causes are still controversial. A series of experiments were performed to understand night myopia in greater detail. METHODS: We used an adaptive optics instrument operating in invisible infrared light to elucidate the actual magnitude of night myopia and its main causes. The experimental setup allowed the manipulation of the eye's aberrations (and particularly spherical aberration) as well as the use of monochromatic and polychromatic stimuli. Eight subjects with normal vision monocularly determined their best focus position subjectively for a Maltese cross stimulus at different levels of luminance, from the baseline condition of 20 cd/m² to the lowest luminance of 22 × 10⁻⁶ cd/m². While subjects performed the focusing tasks, their eye's defocus and aberrations were continuously measured with the 1050-nm Hartmann-Shack sensor incorporated in the adaptive optics instrument. The experiment was repeated for a variety of controlled conditions incorporating specific aberrations of the eye and the chromatic content of the stimuli. RESULTS: We found large inter-subject variability and an average myopic shift of -0.8 D for low light conditions. The main cause responsible for night myopia was the accommodation shift occurring at low light levels. Other factors traditionally suggested to explain night myopia, such as chromatic and spherical aberrations, have a much smaller effect in this mechanism. CONCLUSIONS: An adaptive optics visual analyzer was applied to study the phenomenon of night myopia. We found that the defocus shift occurring in dim light is mainly due to accommodation errors.
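
    For reference, a Hartmann-Shack defocus coefficient converts to an equivalent spherical defocus in diopters via M = -4·sqrt(3)·c(2,0)/r² for a pupil of radius r (ANSI Zernike normalization; sign conventions vary between instruments). A small sketch, not the study's processing code:

    ```python
    import math

    # Convert an ANSI Zernike defocus coefficient c(2,0), in metres, to an
    # equivalent spherical defocus in diopters over a pupil of radius r.
    # Sign conventions differ between instruments; this is one common choice.

    def defocus_diopters(c20_m, pupil_radius_m):
        return -4.0 * math.sqrt(3.0) * c20_m / pupil_radius_m ** 2

    # A +1 um defocus coefficient over a 4-mm pupil (r = 2 mm):
    print(round(defocus_diopters(1e-6, 2e-3), 2))  # -> -1.73 D
    ```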

  12. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and to interpret them with a knowledge-based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace, and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  13. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Random bin picking is one of the most challenging industrial robotics applications. It involves a complicated interaction between the vision system, the robot, and the control system. For a packaging operation requiring a pick-and-place task, the robot system should be able to recognize the applicable target object among the randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object among the randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  14. 5 CFR 532.505 - Night shift differentials.

    Science.gov (United States)

    2010-01-01

    5 Administrative Personnel, 2010-01-01. PREVAILING RATE SYSTEMS, Premium Pay and Differentials; § 532.505 Night shift differentials. (a) Employees shall be entitled to receive night shift differentials in accordance with section 5343 of title 5...

  15. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    International Nuclear Information System (INIS)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and for preparing appropriate damage control plans. An automatic vision system to monitor structures and facilities in blurred, smoky environments, such as the sites of fires and detonations, is therefore essential. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog and dust. To overcome the imaging distortion caused by such obstacles, robust vision systems need extra functions, such as active illumination through the disturbance materials. One such active vision system is a range-gated imaging system, which can acquire image data in blurred and darkened light environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant, and is currently one of the emerging active vision technologies providing 2D and range image data. The range-gated imaging system obtains vision information by summing time-sliced vision images: the illuminant flashes strong light through the disturbance materials, such as smoke and dust particles, for an ultra-short time, and the highly sensitive image sensor is gated with an ultra-short exposure so that only the returning illumination light is captured. In contrast to passive conventional vision systems, the RGI active vision technology enables operation even in harsh environments such as low-visibility smoky surroundings. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of an illumination light, a range-gating camera and a control computer. Visualization experiments are carried out in a low-visibility foggy environment to verify the imaging capability.
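
    The range selection in such a system comes purely from pulse/gate timing. As a rough illustration of the arithmetic involved (not taken from the paper; the slice geometry is generic), the camera gate opens one round-trip time after the laser pulse and stays open for the round-trip time across the desired depth slice:

    ```python
    # Minimal sketch of range-gate timing arithmetic (illustrative, not the
    # paper's implementation): the shutter opens 2*R/c after the laser pulse
    # so that only light reflected from a chosen depth slice is recorded.

    C = 299_792_458.0  # speed of light, m/s

    def gate_timing(slice_start_m: float, slice_depth_m: float):
        """Return (delay_s, width_s) for a depth slice [start, start + depth]."""
        delay = 2.0 * slice_start_m / C   # round-trip time to slice start
        width = 2.0 * slice_depth_m / C   # extra round trip across the slice
        return delay, width

    if __name__ == "__main__":
        # Example: image a 5 m thick slice starting 20 m away through fog.
        delay, width = gate_timing(20.0, 5.0)
        print(f"open gate {delay * 1e9:.1f} ns after the pulse, "
              f"hold it open for {width * 1e9:.1f} ns")
    ```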

  16. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and for preparing appropriate damage control plans. An automatic vision system to monitor structures and facilities in blurred, smoky environments, such as the sites of fires and detonations, is therefore essential. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog and dust. To overcome the imaging distortion caused by such obstacles, robust vision systems need extra functions, such as active illumination through the disturbance materials. One such active vision system is a range-gated imaging system, which can acquire image data in blurred and darkened light environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant, and is currently one of the emerging active vision technologies providing 2D and range image data. The range-gated imaging system obtains vision information by summing time-sliced vision images: the illuminant flashes strong light through the disturbance materials, such as smoke and dust particles, for an ultra-short time, and the highly sensitive image sensor is gated with an ultra-short exposure so that only the returning illumination light is captured. In contrast to passive conventional vision systems, the RGI active vision technology enables operation even in harsh environments such as low-visibility smoky surroundings. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of an illumination light, a range-gating camera and a control computer. Visualization experiments are carried out in a low-visibility foggy environment to verify the imaging capability.

  17. A smart sensor-based vision system: implementation and evaluation

    International Nuclear Information System (INIS)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R

    2006-01-01

    One method of reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach is compared with two architectures implementing CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  18. A smart sensor-based vision system: implementation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R [Institute of Fundamental Electronics, Bat. 220, Paris XI University, 91405 Orsay (France)

    2006-04-21

    One method of reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach is compared with two architectures implementing CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  19. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  20. Clinical Characteristics, Mutation Spectrum, and Prevalence of Åland Eye Disease/Incomplete Congenital Stationary Night Blindness in Denmark

    DEFF Research Database (Denmark)

    Hove, Marianne N; Kilic-Biyik, Kevser Z; Trotter, Alana

    2016-01-01

    Purpose: To assess clinical characteristics, foveal structure, mutation spectrum, and prevalence rate of Åland eye disease (AED)/incomplete congenital stationary night blindness (iCSNB). Methods: A retrospective survey included individuals diagnosed with AED at a national low-vision center from...

  1. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques and spectral signatures is described. The vision instruments for food analysis as well as datasets of the food items...... used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. One supervised feature selection algorithm...... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied on datasets of different food items: meat, dairy, fruits...

  2. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    A.I. Memo No. 1037, April 1988: Dynamical Systems and Motion Vision, Joachim Heel. Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139. Support for the Laboratory's Artificial Intelligence Research is...

  3. Surface Casting Defects Inspection Using Vision System and Neural Network Techniques

    Directory of Open Access Journals (Sweden)

    Świłło S.J.

    2013-12-01

    The paper presents a vision-based approach and neural network techniques for surface defect inspection and categorization. Depending on part design and processing techniques, castings may develop surface discontinuities, such as cracks and pores, that greatly influence the material's properties. Since human visual inspection of the surface is slow and expensive, a computer vision system is an alternative solution for online inspection. The vision system developed by the authors uses an advanced image processing algorithm based on a modified Laplacian-of-Gaussian edge detection method and an advanced lighting system. The defect inspection algorithm exposes several parameters that allow the user to specify the sensitivity level at which defects in the casting are accepted. In addition to the image processing algorithm and the vision system apparatus, an advanced learning process based on neural network techniques has been developed. Finally, as an example, three groups of defects were investigated, demonstrating automatic detection and categorization of the measured defects, such as blowholes, shrinkage porosity and shrinkage cavities.
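
    The abstract names a modified Laplacian-of-Gaussian (LoG) detector but does not publish the modification, so the following is only a minimal sketch of the unmodified base technique, with a `sensitivity` threshold standing in for the user-set acceptance level mentioned above:

    ```python
    # Sketch of a plain Laplacian-of-Gaussian defect-highlighting pass.
    # The paper's "modified" LoG and its lighting system are not public,
    # so this only illustrates the base technique.
    import cv2
    import numpy as np

    def log_defect_mask(gray: np.ndarray, sigma: float = 2.0,
                        sensitivity: float = 6.0) -> np.ndarray:
        blurred = cv2.GaussianBlur(gray, (0, 0), sigma)    # suppress noise first
        log = cv2.Laplacian(blurred, cv2.CV_32F, ksize=3)  # second derivative
        # Strong |LoG| responses mark intensity discontinuities (pores, cracks).
        return (np.abs(log) > sensitivity).astype(np.uint8) * 255

    # mask = log_defect_mask(cv2.imread("casting.png", cv2.IMREAD_GRAYSCALE))
    ```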

  4. The role of vision processing in prosthetic vision.

    Science.gov (United States)

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with these limitations, prosthetic vision may still be able to support functional performance sufficient for tasks that are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information critical to the performance of such tasks is available within the capability of the prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.

  5. Improvement of the image quality of a high-temperature vision system

    International Nuclear Information System (INIS)

    Fabijańska, Anna; Sankowski, Dominik

    2009-01-01

    In this paper, the issues of controlling and improving the image quality of a high-temperature vision system are considered. The image quality improvement is needed to measure the surface properties of metals and alloys. Two levels of image quality control and improvement are defined in the system. The first level, in hardware, aims at adjusting the system configuration to obtain images with the highest contrast and weakest aura. When the optimal configuration is obtained, the second level, in software, is applied. In this stage, image enhancement algorithms are applied which were developed with consideration of the distortions arising from the vision system components and the specificity of images acquired during the measurement process. The developed algorithms have been applied to images in the vision system, and their influence on the accuracy of wetting angle and surface tension determination is considered.

  6. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the needs for efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive adjustable-step scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids color deviation effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
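
    The paper's exact statistics and step schedule are not given, so the sketch below only illustrates the general shape of such a scheme: a gray-world correction whose per-channel gains are applied with a damped, adjustable step (all parameter names are ours):

    ```python
    # Illustrative gray-world white balance with an adjustable step, loosely
    # following the structure described in the abstract (statistics-based
    # initialisation plus iterative refinement); not the authors' algorithm.
    import numpy as np

    def gray_world_iterative(rgb: np.ndarray, step: float = 0.5,
                             iters: int = 10) -> np.ndarray:
        img = rgb.astype(np.float32)
        for _ in range(iters):
            means = img.reshape(-1, 3).mean(axis=0)         # per-channel means
            gains = means.mean() / np.maximum(means, 1e-6)  # pull channels to gray
            # Damped update: step < 1 trades convergence speed for stability.
            img *= 1.0 + step * (gains - 1.0)
        return np.clip(img, 0, 255).astype(np.uint8)
    ```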

  7. PixonVision real-time video processor

    Science.gov (United States)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high-definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into ASICs, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  8. Machine vision system for measuring conifer seedling morphology

    Science.gov (United States)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line- scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
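
    For a flavor of how such features fall out of a backlit silhouette, here is a toy computation of shoot height, stem diameter, projected area and the sturdiness ratio. The pixel pitch and the root-collar and measurement rows are supplied by hand in this sketch, whereas the real system locates the root collar automatically:

    ```python
    # Toy feature extraction from a binarised seedling silhouette
    # (foreground = True); an illustration only, not the system's code.
    import numpy as np

    def seedling_features(mask: np.ndarray, mm_per_px: float,
                          collar_row: int, stem_row: int):
        top_row = mask.any(axis=1).argmax()              # first foreground row
        height_mm = (collar_row - top_row) * mm_per_px   # shoot height
        diameter_mm = mask[stem_row].sum() * mm_per_px   # stem width at one row
        area_mm2 = mask.sum() * mm_per_px ** 2           # projected area
        sturdiness = height_mm / max(diameter_mm, 1e-6)  # height/diameter ratio
        return height_mm, diameter_mm, area_mm2, sturdiness
    ```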

  9. Hunting in bioluminescent light: Vision in the nocturnal box jellyfish Copula sivickisi

    Directory of Open Access Journals (Sweden)

    Anders Garm

    2016-03-01

    Cubomedusae all have a similar set of six eyes on each of their four rhopalia. Still, there is great variation in activity patterns, with some species strictly day active while others are strictly night active. Here we examined the visual ecology of the medusa of the night-active Copula sivickisi from Okinawa using optics, morphology, electrophysiology, and behavioural experiments. We found the lenses of both the upper and lower lens eyes to be image forming but under-focused, resulting in low spatial resolution on the order of 10-15 degrees. The photoreceptor physiology is similar in the two lens eyes: they have a single opsin peaking around 460 nm and low temporal resolution, with a flicker fusion frequency (fff) of 2.5 Hz, indicating adaptations to vision at low light intensities. Further, the outer segments have fluid-filled swellings, which may concentrate the light in the photoreceptor membrane by total internal reflection and thus enhance the signal-to-noise ratio of the eyes. Finally, our behavioural experiments confirmed that the animals use vision when hunting: when active at night, they seek out high prey concentrations through visual attraction to areas with abundant bioluminescent flashes triggered by their prey.

  10. Visionary Critique. Gender, Self and Relationship in Rosetta and Two Days, One Night

    Directory of Open Access Journals (Sweden)

    Stephanie Knauss

    2016-11-01

    The films of Jean-Pierre and Luc Dardenne stand out for their complex, multi-dimensional female and male characters whose representation disrupts gender stereotypes in numerous ways, both in how the characters themselves are depicted and in how they are shown to relate to other individuals and their social context. In this contribution, I explore the themes of self, relationship, solidarity, family and work, all of them recurring issues in the films by the Dardennes, using gender as my primary category of analysis, and focusing in particular on the treatment of these themes in Rosetta (Jean-Pierre and Luc Dardenne, FR/BE 1999) and Deux jours, une nuit (Two Days, One Night, Jean-Pierre and Luc Dardenne, BE/FR/IT 2014). I argue that whereas Rosetta (1999) offers a critique of the damaging effects of the masculinized capitalist system on individuals and their relationships, Two Days, One Night (2014) can be understood as a vision of alternative possibilities of solidarity and women's empowerment and agency even within the persistent context of masculinized capitalism.

  11. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    International Nuclear Information System (INIS)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin

    2014-01-01

    Image information from disaster areas or radiation areas of the nuclear industry is important data for safety inspection and for preparing appropriate damage control plans. Robust vision systems for monitoring structures and facilities in blurred, smoky environments, such as the sites of fires and detonations, are therefore essential for remote monitoring. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog and dust. A vision system based on wavefront correction can be applied to blurred imaging environments, and a range-gated imaging system can be applied to both blurred imaging and darkened light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them with the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant, and the range-gated imaging technique providing 2D and 3D images is currently one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images: in the RGI system, a high-intensity illuminant flashes for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure so that only the returning illumination light is captured.

  12. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Image information from disaster areas or radiation areas of the nuclear industry is important data for safety inspection and for preparing appropriate damage control plans. Robust vision systems for monitoring structures and facilities in blurred, smoky environments, such as the sites of fires and detonations, are therefore essential for remote monitoring. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog and dust. A vision system based on wavefront correction can be applied to blurred imaging environments, and a range-gated imaging system can be applied to both blurred imaging and darkened light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them with the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant, and the range-gated imaging technique providing 2D and 3D images is currently one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images: in the RGI system, a high-intensity illuminant flashes for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure so that only the returning illumination light is captured.

  13. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  14. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in a passenger car requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. The system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system was implemented on a passenger car and its performance verified experimentally.
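
    Once corresponding feature pairs are found, range follows from the standard stereo relation Z = f·B/d; the numbers in this sketch are made up for illustration:

    ```python
    # Standard pinhole/stereo range equation used after feature matching:
    # depth Z = f * B / d, with focal length f (px), baseline B (m) and
    # disparity d (px). All values below are illustrative.
    def depth_from_disparity(f_px: float, baseline_m: float,
                             disparity_px: float) -> float:
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a point in front")
        return f_px * baseline_m / disparity_px

    # e.g. f = 800 px, B = 0.5 m, d = 8 px  ->  Z = 50 m to the lead vehicle
    print(depth_from_disparity(800.0, 0.5, 8.0))
    ```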

  15. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low-level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used...

  16. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  17. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
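
    As a sketch of the fusion step (the state layout, noise levels and constant-velocity model below are illustrative assumptions, and a full EKF would linearize a nonlinear vehicle model), each GPS or image-georeferencing position fix enters as a measurement in a standard predict/update cycle:

    ```python
    # Bare-bones Kalman predict/update cycle of the kind used to fuse GPS,
    # in-vehicle sensors and camera-derived position fixes. All matrices
    # and noise values are illustrative assumptions.
    import numpy as np

    dt = 0.1
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)  # constant-velocity model
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # we observe position only
    Q = np.eye(4) * 0.01                               # process noise
    R = np.eye(2) * 4.0                                # measurement noise (m^2)

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        y = z - H @ x                                  # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        return x + K @ y, (np.eye(4) - K @ H) @ P

    # Each image-georeferencing fix is just another `z` fed into update().
    ```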

  18. Using Weightless Neural Networks for Vergence Control in an Artificial Vision System

    Directory of Open Access Journals (Sweden)

    Karin S. Komati

    2003-01-01

    This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating the vergence of eye movements. This methodology involves using weightless neural networks (WNNs) as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used to control the position of the 'foveae' of these cameras (the high-resolution region of the captured images). Our best architecture is able to control the foveal vergence movements with an average error of only 3.58 image pixels, equivalent to an angular error of approximately 0.629°.
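
    The RAM-discriminator building block behind such WNNs is simple enough to sketch; the WiSARD-style implementation below is a generic illustration (class and parameter names are ours), not the architecture from the paper:

    ```python
    # Minimal WiSARD-style weightless discriminator: random tuples of input
    # bits address one-bit RAM cells; training writes 1s, recall counts hits.
    import random

    class Discriminator:
        def __init__(self, n_bits: int, tuple_size: int, seed: int = 0):
            rng = random.Random(seed)
            idx = list(range(n_bits))
            rng.shuffle(idx)                       # random input-to-RAM mapping
            self.tuples = [idx[i:i + tuple_size]
                           for i in range(0, n_bits, tuple_size)]
            self.rams = [set() for _ in self.tuples]

        def _addresses(self, bits):
            for t, ram in zip(self.tuples, self.rams):
                yield ram, tuple(bits[i] for i in t)

        def train(self, bits):
            for ram, addr in self._addresses(bits):
                ram.add(addr)                      # write a 1 at this address

        def score(self, bits) -> int:
            return sum(addr in ram for ram, addr in self._addresses(bits))

    # d = Discriminator(n_bits=64, tuple_size=8)
    # d.train([1, 0] * 32); print(d.score([1, 0] * 32))  # -> 8 (all RAMs hit)
    ```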

  19. Progress in computer vision.

    Science.gov (United States)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate the advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  20. Monitoring system of multiple fire fighting based on computer vision

    Science.gov (United States)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing an increasingly important role. This paper presents a new monitoring system for fighting multiple fires based on computer vision and color detection. The system can locate the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail; the design of the relevant hardware and software is also introduced. The principle and process of color detection and image processing are given as well. The system ran well in tests, and it has high reliability, low cost, and easy node expansion, giving it bright prospects for application and popularization.
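
    A minimal sketch of the colour-detection step might look as follows, assuming flame pixels are segmented by an HSV threshold and the mask centroid is used as the aiming point (the threshold values are illustrative, not the authors'):

    ```python
    # Threshold flame-like pixels in HSV and take the centroid of the mask
    # as the aiming point for the hydrant; an illustrative sketch only.
    import cv2
    import numpy as np

    def fire_centroid(bgr: np.ndarray):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Bright, saturated red-orange-yellow band (hue 0-50 on OpenCV's 0-179)
        mask = cv2.inRange(hsv, (0, 120, 180), (50, 255, 255))
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None                      # no fire-coloured pixels found
        return m["m10"] / m["m00"], m["m01"] / m["m00"]  # (x, y) centroid
    ```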

  1. Synthetic vision systems: operational considerations simulation experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  2. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  3. An Automatic Assembling System for Sealing Rings Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2017-01-01

    In order to grab and place the sealing rings of a battery lid quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically by the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally, the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can successfully grab the sealing rings and place them on the sealing ports of the fast-moving battery lid. More importantly, the proposed system can significantly improve the efficiency of the battery production line.
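
    The abstract does not say which detector the vision module uses; one plausible way to recognise and locate ring-shaped parts is the Hough circle transform, as in this sketch (all radii and thresholds are placeholders):

    ```python
    # Locate ring-shaped parts with the Hough circle transform; parameters
    # are placeholders, not values from the paper.
    import cv2
    import numpy as np

    def find_rings(gray: np.ndarray):
        blurred = cv2.medianBlur(gray, 5)          # suppress speckle first
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=30, param1=120, param2=40,
                                   minRadius=10, maxRadius=60)
        if circles is None:
            return []
        # Each detection is (center_x, center_y, radius) in pixels.
        return [(x, y, r) for x, y, r in np.round(circles[0]).astype(int)]
    ```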

  4. Color Calibration for Colorized Vision System with Digital Sensor and LED Array Illuminator

    Directory of Open Access Journals (Sweden)

    Zhenmin Zhu

    2016-01-01

    Color measurement by a colorized vision system is a superior method for evaluating color objectively and continuously. However, the accuracy of color measurement is influenced by the spectral response of the digital sensor and the spectral mismatch of the illumination. In this paper, a colorized vision system with a digital sensor and an LED array illuminator is presented. The polynomial-based regression method is applied to solve the problem of color calibration in the sRGB and CIE L*a*b* color spaces. By mapping the tristimulus values from RGB to sRGB color space, the color difference between the estimated values and the reference values is less than 3 ΔE. Additionally, the mapping matrix Φ(RGB→sRGB) has proved to perform better at reducing the color difference, and it is subsequently introduced into the proposed colorized vision system for better color measurement. Printed cloth and colored ceramic tile were chosen as application samples for the colorized vision system. As shown in the experimental data, the average color difference of the images is less than 6 ΔE, indicating that better color measurement performance is obtained with the proposed colorized vision system.
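
    The polynomial-based regression named in the abstract can be sketched as an ordinary least-squares fit of a mapping matrix over matched colour patches; the linear-plus-bias basis below is the simplest such polynomial and is our choice, not necessarily the authors':

    ```python
    # Least-squares fit of an RGB -> sRGB mapping matrix over matched
    # training patches (e.g. a colour checker), assumed to be given.
    import numpy as np

    def fit_color_matrix(rgb: np.ndarray, srgb: np.ndarray) -> np.ndarray:
        """rgb, srgb: (N, 3) matched samples. Returns a 4x3 mapping Phi."""
        X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])   # add bias column
        Phi, *_ = np.linalg.lstsq(X, srgb, rcond=None)     # min |X Phi - srgb|
        return Phi

    def apply_color_matrix(rgb: np.ndarray, Phi: np.ndarray) -> np.ndarray:
        X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
        return X @ Phi

    # Phi = fit_color_matrix(measured_rgb, reference_srgb)
    ```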

  5. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (the 0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  6. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    Science.gov (United States)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check also contributes to the integrity analysis. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our

  7. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Building a human-like robot that could be involved in our daily lives is the dream of many scientists. Achieving a sophisticated vision system for the robot, one that can enhance its ability to interact with humans in real time, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human visual system. The experimental results verified the validity of the model: the robot could see clearly in real time and build a mental map that helped it to be aware of frontal users and to develop positive interactions with them.
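
    Of the two mechanisms described, the non-uniform photoreceptor distribution is the easier to approximate in software: a log-polar resampling, dense at the image centre and sparse in the periphery, gives a rough analogue. The OpenCV-based sketch below is our approximation, not the authors' implementation:

    ```python
    # Foveated ("variant photoreceptor distribution") resampling via a
    # log-polar warp: high resolution near the centre, coarse periphery.
    import cv2
    import numpy as np

    def foveate(bgr: np.ndarray, out_size=(128, 128)) -> np.ndarray:
        h, w = bgr.shape[:2]
        center = (w / 2.0, h / 2.0)
        max_radius = min(center)             # largest circle inside the frame
        return cv2.warpPolar(bgr, out_size, center, max_radius,
                             cv2.INTER_LINEAR | cv2.WARP_POLAR_LOG)
    ```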

  8. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision systems of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor-made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics, commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  9. The Circadian System Contributes to Apnea Lengthening across the Night in Obstructive Sleep Apnea.

    Science.gov (United States)

    Butler, Matthew P; Smales, Carolina; Wu, Huijuan; Hussain, Mohammad V; Mohamed, Yusef A; Morimoto, Miki; Shea, Steven A

    2015-11-01

    Objective: To test the hypothesis that respiratory event duration exhibits an endogenous circadian rhythm. Design: Within-subject and between-subjects. Setting: Inpatient intensive physiologic monitoring unit at the Brigham and Women's Hospital. Participants: Seven subjects with moderate/severe sleep apnea and four controls, age 48 (SD = 12) years, 7 males. Methods: Subjects completed a 5-day inpatient protocol in dim light. Polysomnography was recorded during an initial control 8-h night scheduled at the usual sleep time, then through 10 recurrent cycles of 2 h 40 min sleep and 2 h 40 min wake evenly distributed across all circadian phases, and finally during another 8-h control sleep period. Event durations, desaturations, and apnea-hypopnea index for each sleep opportunity were assessed according to circadian phase (derived from salivary melatonin), time into sleep, and sleep stage. Results: Average respiratory event durations in NREM sleep significantly lengthened across both control nights (21.9 to 28.2 sec and 23.7 to 30.2 sec, respectively). During the circadian protocol, event duration in NREM increased across the circadian phases that corresponded to the usual sleep period, accounting for > 50% of the increase across normal 8-h control nights. AHI and desaturations were also rhythmic: AHI was highest in the biological day while desaturations were greatest in the biological night. Conclusions: The endogenous circadian system plays an important role in the prolongation of respiratory events across the night, and might provide a novel therapeutic target for modulating sleep apnea. © 2015 Associated Professional Sleep Societies, LLC.

  10. Modelling and Analysis of Vibrations in a UAV Helicopter with a Vision System

    Directory of Open Access Journals (Sweden)

    G. Nicolás Marichal Plasencia

    2012-11-01

    The analysis of the nature and damping of unwanted vibrations in Unmanned Aerial Vehicle (UAV) helicopters is an important task when images are to be obtained from on-board vision systems. In this article, the authors model a UAV system, generate a range of vibrations originating in the main rotor, and design a control methodology to damp these vibrations. The UAV is modelled using VehicleSim, and the vibrations that appear on the fuselage are analysed with SimMechanics software to study their effects on the on-board vision system. Following this, the authors present a control method based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) that achieves satisfactory damping results for the on-board vision system.

  11. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame rate of one frame per second, with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring in situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.
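
    The record does not disclose the scanner's modulation scheme, but coherent laser radars commonly recover range from the beat frequency of an FMCW chirp; under that assumption, the following generic arithmetic shows how millimetre-class range resolution relates to sweep bandwidth:

    ```python
    # Generic FMCW range equation R = c * f_beat * T / (2 * B). The modulation
    # parameters are assumptions for illustration, not the system's values.
    C = 299_792_458.0  # speed of light, m/s

    def fmcw_range(f_beat_hz: float, chirp_s: float, bandwidth_hz: float) -> float:
        return C * f_beat_hz * chirp_s / (2.0 * bandwidth_hz)

    def range_resolution(bandwidth_hz: float) -> float:
        return C / (2.0 * bandwidth_hz)   # ~1 mm needs ~150 GHz of sweep

    print(range_resolution(150e9))        # -> about 0.001 m
    ```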

  12. 76 FR 27372 - Small Business Size Standards: Waiver of the Nonmanufacturer Rule

    Science.gov (United States)

    2011-05-11

    ... waiver: PVS-14, PVS-17, and AVS-9 night vision systems. However, SBA has identified, through market... component assemblers exist for PVS-14, PVS-17, and AVS-9 night vision systems, and, as such, these items do...

  13. Background staining of visualization systems in immunohistochemistry: comparison of the Avidin-Biotin Complex system and the EnVision+ system.

    Science.gov (United States)

    Vosse, Bettine A H; Seelentag, Walter; Bachmann, Astrid; Bosman, Fred T; Yan, Pu

    2007-03-01

    The aim of this study was to evaluate specific immunostaining and background staining in formalin-fixed, paraffin-embedded human tissues with the 2 most frequently used immunohistochemical detection systems, Avidin-Biotin-Peroxidase (ABC) and EnVision+. A series of fixed tissues, including breast, colon, kidney, larynx, liver, lung, ovary, pancreas, prostate, stomach, and tonsil, was used in the study. Three monoclonal antibodies, 1 against a nuclear antigen (Ki-67), 1 against a cytoplasmic antigen (cytokeratin), and 1 against a cytoplasmic and membrane-associated antigen and a polyclonal antibody against a nuclear and cytoplasmic antigen (S-100) were selected for these studies. When the ABC system was applied, immunostaining was performed with and without blocking of endogenous avidin-binding activity. The intensity of specific immunostaining and the percentage of stained cells were comparable for the 2 detection systems. The use of ABC caused widespread cytoplasmic and rare nuclear background staining in a variety of normal and tumor cells. A very strong background staining was observed in colon, gastric mucosa, liver, and kidney. Blocking avidin-binding capacity reduced background staining, but complete blocking was difficult to attain. With the EnVision+ system no background staining occurred. Given the efficiency of the detection, equal for both systems or higher with EnVision+, and the significant background problem with ABC, we advocate the routine use of the EnVision+ system.

  14. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de Sonora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

    This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun position throughout the day. The algorithms to calculate the position of the sun and the image processing are developed in LabView. The responses obtained from the control show that it is possible to use vision for a control scheme in closed loop.
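
    As a rough illustration of the vision-feedback idea (the original was implemented in LabView; the threshold and the saturated-blob assumption below are ours), the sun can be segmented as the brightest blob in the frame and its centroid offset used as the pan/tilt error signal:

    ```python
    # Find the sun's centroid in the frame and convert the pixel offset into
    # pan/tilt error signals; thresholds and gains are placeholders.
    import cv2
    import numpy as np

    def tracking_error(gray: np.ndarray):
        _, mask = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)  # saturated blob
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None                              # sun not in view
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        h, w = gray.shape
        return cx - w / 2.0, cy - h / 2.0            # drive pan/tilt toward zero
    ```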

  15. 'Irrigation by night' in the Eastern Cape, South Africa

    African Journals Online (AJOL)

    2017-01-01

    Jan 1, 2017 ... tions in irrigation systems in the night: 'It is common place that the night is the time ..... roads and rainwater tanks ('JoJo's') to water the gardens. The ..... drainage system throughout the home garden, but also directly from the ...

  16. Design of a Day/Night Star Camera System

    Science.gov (United States)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

    This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2 × 10⁻⁵ W/cm²/sr/µm in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340 × 1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 × 6.8 µm pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 square degrees and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
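
    The quoted optical parameters are enough to reproduce the geometry. A short Python check (assuming the effective focal length is simply f-number × aperture diameter, about 180 mm) recovers a plate scale near 8 arcsec/pixel and a field of view close to the quoted 6.33 square degrees:

        import math

        f_number   = 2.8
        aperture_m = 64.3e-3      # lens diameter [m]
        pixel_m    = 6.8e-6       # pixel pitch [m]
        n_x, n_y   = 1340, 1037   # CCD format

        focal_m = f_number * aperture_m            # ~0.180 m focal length
        scale_arcsec = 206265 * pixel_m / focal_m  # arcsec per pixel, ~7.8

        fov_x = math.degrees(n_x * pixel_m / focal_m)  # small-angle approximation
        fov_y = math.degrees(n_y * pixel_m / focal_m)
        print(f"{scale_arcsec:.1f} arcsec/px, {fov_x * fov_y:.2f} sq deg")  # ~6.5 sq deg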

  17. Night-to-night arousal variability and interscorer reliability of arousal measurements.

    Science.gov (United States)

    Loredo, J S; Clausen, J L; Ancoli-Israel, S; Dimsdale, J E

    1999-11-01

    Measurement of arousals from sleep is clinically important; however, their definition is not well standardized, and little data exist on reliability. The purpose of this study is to determine factors that affect arousal scoring reliability and night-to-night arousal variability. The night-to-night arousal variability and interscorer reliability were assessed in 20 subjects with and without obstructive sleep apnea undergoing attended polysomnography during two consecutive nights. Five definitions of arousal were studied, assessing duration of electroencephalographic (EEG) frequency changes, increases in electromyographic (EMG) activity and leg movement, association with respiratory events, as well as the American Sleep Disorders Association (ASDA) definition of arousals. Interscorer reliability varied with the definition of arousal and ranged from an intraclass correlation (ICC) of 0.19 to 0.92. Arousals that included increases in EMG activity or leg movement had the greatest reliability, especially when associated with respiratory events (ICC 0.76 to 0.92). The ASDA arousal definition had high interscorer reliability (ICC 0.84). Reliability was lowest for arousals consisting of EEG changes lasting <3 seconds (ICC 0.19 to 0.37). The within-subjects night-to-night arousal variability was low for all arousal definitions. In a heterogeneous population, interscorer arousal reliability is enhanced by increases in EMG activity, leg movements, and respiratory events and decreased by short duration EEG arousals. The arousal index night-to-night variability was low for all definitions.

  18. A machine vision system for the calibration of digital thermometers

    International Nuclear Information System (INIS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Alvarez-Valado, Victor; Martín, Fernando; Formella, Arno

    2009-01-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has proven to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians.
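
    The paper's perception-based digit classifier is not reproduced here, but the conventional baseline it runs alongside can be sketched simply. Assuming a binarized region of interest tightly cropped around one seven-segment digit, sampling the seven segment zones and looking the on/off pattern up in a table is often sufficient:

        import numpy as np

        # Canonical seven-segment patterns: (top, top-left, top-right,
        # middle, bottom-left, bottom-right, bottom)
        SEGMENT_TABLE = {
            (1,1,1,0,1,1,1): 0, (0,0,1,0,0,1,0): 1, (1,0,1,1,1,0,1): 2,
            (1,0,1,1,0,1,1): 3, (0,1,1,1,0,1,0): 4, (1,1,0,1,0,1,1): 5,
            (1,1,0,1,1,1,1): 6, (1,0,1,0,0,1,0): 7, (1,1,1,1,1,1,1): 8,
            (1,1,1,1,0,1,1): 9,
        }

        def decode_digit(roi):
            """roi: binary (0/1) numpy array tightly cropped around one digit.
            Sample the mean intensity of each segment zone and classify."""
            h, w = roi.shape
            regions = (                               # (rows, cols) per segment
                (slice(0, h//5),        slice(w//4, 3*w//4)),  # top
                (slice(h//5, h//2),     slice(0, w//4)),       # top-left
                (slice(h//5, h//2),     slice(3*w//4, w)),     # top-right
                (slice(2*h//5, 3*h//5), slice(w//4, 3*w//4)),  # middle
                (slice(h//2, 4*h//5),   slice(0, w//4)),       # bottom-left
                (slice(h//2, 4*h//5),   slice(3*w//4, w)),     # bottom-right
                (slice(4*h//5, h),      slice(w//4, 3*w//4)),  # bottom
            )
            pattern = tuple(int(roi[r].mean() > 0.5) for r in regions)
            return SEGMENT_TABLE.get(pattern)  # None if the pattern is ambiguous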

  19. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  20. Vision and dual IMU integrated attitude measurement system

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang

    2018-01-01

    To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and dual IMUs (inertial measurement units) is built up. The system fuses the attitude information from vision with the angular rates of the dual IMUs by an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured motion object and the other (slave) to the rocking base. As the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame, where the latter can be seen as redundant, harmful movement information for relative attitude measurement between the measured object and the rocking base. The slave IMU serves to remove the motion of the rocking base relative to the inertial frame from the master IMU. The proposed integrated attitude measurement system is tested on a practical experimental platform, and experiment results with superior precision and reliability show the feasibility and effectiveness of the proposed system.
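
    The core of the base-motion rejection step can be written compactly. In the sketch below (a simplification of the EKF formulation; the master-from-slave rotation is assumed to come from the vision attitude fix), the base rate sensed by the slave IMU is mapped into the master frame and subtracted:

        import numpy as np

        def relative_rate(omega_master, omega_slave, R_master_from_slave):
            """Angular rate of the measured object relative to the rocking base.

            omega_master, omega_slave : gyro outputs [rad/s], each expressed in
                its own sensor frame, both measured relative to the inertial frame.
            R_master_from_slave : 3x3 rotation taking slave-frame vectors into
                the master frame (assumed known from the vision attitude fix).
            """
            # The base motion sensed by the slave IMU is mapped into the master
            # frame and removed, leaving the object's motion w.r.t. the base.
            return omega_master - R_master_from_slave @ np.asarray(omega_slave)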

  1. Diagnosing night sweats.

    Science.gov (United States)

    Viera, Anthon J; Bond, Michael M; Yates, Scott W

    2003-03-01

    Night sweats are a common outpatient complaint, yet literature on the subject is scarce. Tuberculosis and lymphoma are diseases in which night sweats are a dominant symptom, but these are infrequently found to be the cause of night sweats in modern practice. While these diseases remain important diagnostic considerations in patients with night sweats, other diagnoses to consider include human immunodeficiency virus, gastroesophageal reflux disease, obstructive sleep apnea, hyperthyroidism, hypoglycemia, and several less common diseases. Antihypertensives, antipyretics, other medications, and drugs of abuse such as alcohol and heroin may cause night sweats. Serious causes of night sweats can be excluded with a thorough history, physical examination, and directed laboratory and radiographic studies. If a history and physical do not reveal a possible diagnosis, physicians should consider a purified protein derivative, complete blood count, human immunodeficiency virus test, thyroid-stimulating hormone test, erythrocyte sedimentation rate evaluation, chest radiograph, and possibly chest and abdominal computed tomographic scans and bone marrow biopsy.

  2. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly find the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
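
    Once a part is matched in both views of a rectified stereo pair, its distance follows from the standard pinhole relation Z = fB/d, which is what bridges image coordinates and the robot's Cartesian workspace. A minimal sketch (the focal length, baseline and disparity values are illustrative):

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Classic pinhole stereo relation: Z = f * B / d.
            disparity_px : horizontal pixel offset of a part between rectified
                           left and right images
            focal_px     : focal length in pixels
            baseline_m   : distance between the two camera centres [m]
            """
            if disparity_px <= 0:
                raise ValueError("part must be in front of both cameras")
            return focal_px * baseline_m / disparity_px

        # e.g. a component seen with 48 px disparity by an 800 px-focal,
        # 10 cm-baseline rig sits at roughly 1.67 m
        print(depth_from_disparity(48, 800.0, 0.10))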

  3. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  4. Semiautonomous teleoperation system with vision guidance

    Science.gov (United States)

    Yu, Wai; Pretlove, John R. G.

    1998-12-01

    This paper describes ongoing research on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots always suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. Thus, this system has been developed for this purpose. It also serves as an experimental platform to test the idea of combining human and computer intelligence in teleoperation and finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system and a graphical user interface which connects the operator with the remote robot. The system description is given in this paper as well as preliminary experimental results of the system evaluation.

  5. Passive ventilation systems with heat recovery and night cooling

    DEFF Research Database (Denmark)

    Hviid, Christian Anker; Svendsen, Svend

    2008-01-01

    In building design, the requirements for energy consumption for ventilation, heating and cooling and the requirements for increasingly better indoor climate are two opposing factors. This paper presents the schematic layout and simulation results of an innovative multifunctional ventilation concept with little energy consumption and with satisfying indoor climate. The concept is based on using passive measures like stack and wind driven ventilation, effective night cooling and low pressure loss heat recovery using two fluid coupled water-to-air heat exchangers developed at the Technical University of Denmark. Through building integration in high performance offices the system is optimized to incorporate multiple functions like heating, cooling and ventilation, thus saving the expenses of separate cooling and heating systems. The simulation results are derived using the state-of-the-art building ...

  6. Restoration of vision after transplantation of photoreceptors.

    Science.gov (United States)

    Pearson, R A; Barber, A C; Rizzi, M; Hippert, C; Xue, T; West, E L; Duran, Y; Smith, A J; Chuang, J Z; Azam, S A; Luhmann, U F O; Benucci, A; Sung, C H; Bainbridge, J W; Carandini, M; Yau, K-W; Sowden, J C; Ali, R R

    2012-05-03

    Cell transplantation is a potential strategy for treating blindness caused by the loss of photoreceptors. Although transplanted rod-precursor cells are able to migrate into the adult retina and differentiate to acquire the specialized morphological features of mature photoreceptor cells, the fundamental question remains whether transplantation of photoreceptor cells can actually improve vision. Here we provide evidence of functional rod-mediated vision after photoreceptor transplantation in adult Gnat1−/− mice, which lack rod function and are a model of congenital stationary night blindness. We show that transplanted rod precursors form classic triad synaptic connections with second-order bipolar and horizontal cells in the recipient retina. The newly integrated photoreceptor cells are light-responsive with dim-flash kinetics similar to adult wild-type photoreceptors. By using intrinsic imaging under scotopic conditions we demonstrate that visual signals generated by transplanted rods are projected to higher visual areas, including V1. Moreover, these cells are capable of driving optokinetic head tracking and visually guided behaviour in the Gnat1−/− mouse under scotopic conditions. Together, these results demonstrate the feasibility of photoreceptor transplantation as a therapeutic strategy for restoring vision after retinal degeneration.

  7. Vision, eye disease, and art: 2015 Keeler Lecture.

    Science.gov (United States)

    Marmor, M F

    2016-02-01

    The purpose of this study was to examine normal vision and eye disease in relation to art. Ophthalmology cannot explain art, but vision is a tool for artists and its normal and abnormal characteristics may influence what an artist can do. The retina codes for contrast, and the impact of this is evident throughout art history from Asian brush painting, to Renaissance chiaroscuro, to Op Art. Art exists, and can portray day or night, only because of the way retina adjusts to light. Color processing is complex, but artists have exploited it to create shimmer (Seurat, Op Art), or to disconnect color from form (fauvists, expressionists, Andy Warhol). It is hazardous to diagnose eye disease from an artist's work, because artists have license to create as they wish. El Greco was not astigmatic; Monet was not myopic; Turner did not have cataracts. But when eye disease is documented, the effects can be analyzed. Color-blind artists limit their palette to ambers and blues, and avoid greens. Dense brown cataracts destroy color distinctions, and Monet's late canvases (before surgery) showed strange and intense uses of color. Degas had failing vision for 40 years, and his pastels grew coarser and coarser. He may have continued working because his blurred vision smoothed over the rough work. This paper can barely touch upon the complexity of either vision or art. However, it demonstrates some ways in which understanding vision and eye disease give insight into art, and thereby an appreciation of both art and ophthalmology.

  8. Vision and laterality: does occlusion disclose a feedback processing advantage for the right hand system?

    Science.gov (United States)

    Buekers, M J; Helsen, W F

    2000-09-01

    The main purpose of this study was to examine whether manual asymmetries could be related to the superiority of the left hemisphere/right hand system in processing visual feedback. Subjects were tested when performing single (Experiment 1) and reciprocal (Experiment 2) aiming movements under different vision conditions (full vision, 20 ms on/180 ms off, 10/90, 40/160, 20/80, 60/120, 20/40). Although in both experiments right hand advantages were found, manual asymmetries did not interact with intermittent vision conditions. Similar patterns of results were found across vision conditions for both hands. These data do not support the visual feedback processing hypothesis of manual asymmetry. Motor performance is affected to the same extent for both hand systems when vision is degraded.

  9. Minor Characters in William Shakespeare's Twelfth Night and A Midsummer Night's Dream

    Directory of Open Access Journals (Sweden)

    Zahraa Adnan Baqer

    2018-01-01

    This paper aims at discussing the role of the minor characters in William Shakespeare's Twelfth Night and A Midsummer Night's Dream. The study assumes that without the first group of minor characters, associated with Olivia, the play Twelfth Night would lose much of its humor, and without the second group, associated with Sebastian, the play would fall apart. On the other hand, in Shakespeare's A Midsummer Night's Dream minor characters play important roles; without them, the action does not run smoothly, or does not run at all. The paper falls into three sections. Section one deals with the role of each minor character in Twelfth Night. Section two focuses on the minor characters in A Midsummer Night's Dream. Section three is a conclusion which sums up the findings of the study.

  10. Single night postoperative prone posturing in idiopathic macular hole surgery.

    LENUS (Irish Health Repository)

    2012-02-01

    Purpose. To evaluate the role of postoperative prone posturing for a single night in the outcome of trans pars plana vitrectomy (TPPV) with internal limiting membrane (ILM) peel and 20% perfluoroethane (C2F6) internal tamponade for idiopathic macular hole. Methods. This prospective trial enrolled 14 eyes in 14 consecutive patients with idiopathic macular hole. All eyes underwent TPPV with vision blue assisted ILM peeling with and without phacoemulsification and intraocular lens (IOL) for macular hole. Intraocular gas tamponade (20% C2F6) was used in all cases with postoperative face-down posturing overnight and without specific posturing afterwards. LogMAR visual acuity, appearance by slit-lamp biomicroscopy, and ocular coherence tomography (OCT) scans were compared preoperatively and postoperatively to assess outcome. Results. Among 14 eyes recruited, all eyes were phakic; 50% of patients underwent concurrent phacoemulsification with IOL. The macular holes were categorized preoperatively by OCT appearance, 4 (28.57%) were stage 2, 7 (50%) were stage 3, and 3 (21.43%) were stage 4. Mean macular hole size was 0.35 disk diameters. Symptoms of macular hole had been present for an average of 6.5 months. All holes (100%) were closed 3 and 6 months postoperatively. Mean visual acuity (logMAR) was improved to 0.61 at 3 months and was stable at 6 months after the surgery. None of the eyes had worse vision postoperatively. Conclusions. Vitrectomy with ILM peeling and 20% C2F6 gas with a brief postoperative 1 night prone posturing regimen is a reasonable approach to achieve anatomic closure in idiopathic macular hole. Concurrent cataract extraction did not alter outcomes and was not associated with any additional complications.

  11. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  12. Integration and coordination in a cognitive vision system

    OpenAIRE

    Wrede, Sebastian; Hanheide, Marc; Wachsmuth, Sven; Sagerer, Gerhard

    2006-01-01

    In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-d environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information tha...

  13. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-01-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)

  14. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip [University of Florida, Gainesville, FL 32611 (United States)

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
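
    The distance-dependence correction at the heart of the calibration can be illustrated in one function: the vision tracker supplies the source-detector distance, and the count rate is normalized by the inverse-square law. A minimal sketch (scene-specific deviations from that law, which the abstract emphasizes, would enter as an additional calibrated factor):

        def corrected_count_rate(raw_rate_cps, distance_m, ref_distance_m=1.0):
            """Normalize a detector count rate to a reference distance using the
            inverse-square law, with the source-detector distance supplied by
            the computer-vision tracker."""
            return raw_rate_cps * (distance_m / ref_distance_m) ** 2

        # e.g. 25 counts/s seen at 2 m corresponds to 100 counts/s at 1 m
        print(corrected_count_rate(25.0, 2.0))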

  15. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    International Nuclear Information System (INIS)

    Energetics, Inc.

    2000-01-01

    The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, which was held on July 27, 1999 in Washington, D.C., in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors were discussed that need to be considered in describing the successful future commercialization of cryogenic systems. Such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD&D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held on two focus topics: identifying critical factors that need to be included in a cryogenics vision for HTS electric power systems (from the HTS equipment end-user perspective); and identifying R&D needs and partnership roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document. The ideas and suggestions raised during ...

  16. Accurate Localization of Communicant Vehicles using GPS and Vision Systems

    Directory of Open Access Journals (Sweden)

    Georges CHALLITA

    2009-07-01

    The new generation of ADAS systems based on cooperation between vehicles offers serious prospects for road safety. Inter-vehicle cooperation is made possible by advances in wireless mobile ad hoc networks. In this paper, we develop a system that minimizes the imprecision of the GPS used for car tracking, based on the data given by the GPS (coordinates and speed) in addition to vision data collected from the onboard system in the vehicle (camera and processor). Localization information can be exchanged between the vehicles through a wireless communication device. The system adopts the Monte Carlo method, i.e. a particle filter, for the treatment of the GPS data and vision data. An experimental study of this system is performed on our fleet of experimental communicating vehicles.
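
    A particle filter of the kind the abstract refers to is easy to outline. The sketch below is a generic planar example with assumed noise levels, not the authors' exact formulation; it propagates particles with the GPS speed, weights them against the latest GPS fix, and resamples when the effective sample size collapses. A vision-derived position could be fused as a second weighting step of the same form:

        import numpy as np

        rng = np.random.default_rng(0)

        def pf_step(particles, weights, speed, dt, gps_xy, gps_sigma):
            """One predict/update/resample cycle of a planar particle filter.
            particles : (N, 2) candidate vehicle positions
            speed     : (2,) velocity estimate from the GPS receiver
            gps_xy    : noisy GPS position fix used in the weighting step
            """
            # Predict: propagate with the reported speed plus process noise
            particles = particles + speed * dt + rng.normal(0, 0.5, particles.shape)
            # Update: Gaussian likelihood of each particle given the GPS fix
            d2 = np.sum((particles - gps_xy) ** 2, axis=1)
            weights = weights * np.exp(-0.5 * d2 / gps_sigma ** 2)
            weights = weights / weights.sum()
            # Resample when the effective sample size collapses
            if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
                idx = rng.choice(len(particles), len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights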

  17. Embedded active vision system based on an FPGA architecture

    OpenAIRE

    Chalimbaud , Pierre; Berry , François

    2006-01-01

    In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, ...

  18. Adnyamathanha Night Skies

    Science.gov (United States)

    Curnow, Paul

    2009-06-01

    Aboriginal Australians have been viewing the night skies of Australia for some 45,000 years and possibly much longer. During this time they have been able to develop a complex knowledge of the night sky, the terrestrial environment in addition to seasonal changes. However, few of us in contemporary society have an in-depth knowledge of the nightly waltz of stars above.

  19. Vision-based pedestrian protection systems for intelligent vehicles

    CERN Document Server

    Geronimo, David

    2013-01-01

    Pedestrian Protection Systems (PPSs) are on-board systems aimed at detecting and tracking people in the surroundings of a vehicle in order to avoid potentially dangerous situations. These systems, together with other Advanced Driver Assistance Systems (ADAS) such as lane departure warning or adaptive cruise control, are one of the most promising ways to improve traffic safety. By the use of computer vision, cameras working either in the visible or infra-red spectra have been demonstrated as a reliable sensor to perform this task. Nevertheless, the variability of humans' appearance, not only in ...
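
    As a concrete baseline for the camera-based detection the book describes, OpenCV ships a pedestrian detector built on HOG features with a linear SVM (the Dalal-Triggs approach). A minimal usage sketch; the file names are placeholders:

        import cv2

        # HOG + linear-SVM people detector, a common baseline in PPS research
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        frame = cv2.imread("street.jpg")  # hypothetical input frame
        boxes, scores = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detections.jpg", frame)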

  20. Embedded Platforms for Computer Vision-based Advanced Driver Assistance Systems: a Survey

    OpenAIRE

    Velez, Gorka; Otaegui, Oihana

    2015-01-01

    Computer vision, either alone or combined with other technologies such as radar or lidar, is one of the key technologies used in Advanced Driver Assistance Systems (ADAS). Its role in understanding and analysing the driving scene is of great importance, as can be noted from the number of ADAS applications that use this technology. However, porting a vision algorithm to an embedded automotive system is still very challenging, as there must be a trade-off between several design requisites. Further...

  1. Global Night-Time Lights for Observing Human Activity

    Science.gov (United States)

    Hipskind, Stephen R.; Elvidge, Chris; Gurney, K.; Imhoff, Mark; Bounoua, Lahouari; Sheffner, Edwin; Nemani, Ramakrishna R.; Pettit, Donald R.; Fischer, Marc

    2011-01-01

    We present a concept for a small satellite mission to make systematic, global observations of night-time lights with spatial resolution suitable for discerning the extent, type and density of human settlements. The observations will also allow better understanding of the fine-scale distribution of fossil fuel CO2 emissions. The NASA Earth Science Decadal Survey recommends more focus on direct observations of human influence on the Earth system. The most dramatic and compelling observations of human presence on the Earth are the night light observations taken by the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS). Beyond delineating the footprint of human presence, night light data, when assembled and evaluated with complementary data sets, can determine the fine-scale spatial distribution of global fossil fuel CO2 emissions. Understanding fossil fuel carbon emissions is critical to understanding the entire carbon cycle, and especially the carbon exchange between terrestrial and oceanic systems.

  2. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    Science.gov (United States)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. It holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the bottle volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system is presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3D position in space in real time, with a desired resolution of ±1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
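
    With a calibrated stereo rig, the 3D position of the tracked probe follows from one matched pixel pair per frame. A compact OpenCV sketch (the projection matrices are assumed to come from a prior stereo calibration; this is an illustration, not the collaboration's code):

        import cv2
        import numpy as np

        def triangulate(P_left, P_right, xy_left, xy_right):
            """Recover the 3D position of a tracked marker (e.g. the Hall probe)
            from one matched pixel pair of a calibrated stereo rig.
            P_left, P_right   : 3x4 projection matrices from stereo calibration
            xy_left, xy_right : (2,) pixel coordinates of the marker
            """
            pl = np.asarray(xy_left, float).reshape(2, 1)
            pr = np.asarray(xy_right, float).reshape(2, 1)
            X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # homogeneous 4x1
            return (X_h[:3] / X_h[3]).ravel()                     # metric X, Y, Z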

  3. Design and Modelling of Water Chilling Production System by the Combined Effects of Evaporation and Night Sky Radiation

    Directory of Open Access Journals (Sweden)

    Ahmed Y. Taha Al-Zubaydi

    2014-01-01

    The design and mathematical modelling of a thermal radiator panel, used primarily to measure night sky radiation from a wet coated surface, is presented in this paper. The panel consists of an upper coated aluminium sheet laminated to an ethylene vinyl acetate foam backing block as insulation. Water is sprayed onto the surface of the panel so that an evaporative cooling effect is gained in addition to the radiation effect; the surface of the panel is thus wetted in order to study and measure the night sky radiation from the panel's wet surface. In this case, the water is circulated over the upper face of the panel during night time. Initial TRNSYS simulations of the performance of the system are presented, and it is planned to use the panel as a calibrated instrument for discriminating between the cooling effects of night sky radiation and evaporation.
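
    The heat balance behind such a panel can be stated in a few lines. The sketch below uses standard textbook relations, not the paper's TRNSYS model; the emissivity, mass-transfer coefficient and temperatures are assumptions. It adds a grey-body radiative exchange with the sky to a linearized evaporative term:

        SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

        def panel_cooling_flux(T_surf, T_sky, h_e=0.0, dp_vapour=0.0, eps=0.95):
            """Rough net cooling flux [W/m^2] of a wetted panel at night.
            Radiative term: eps * sigma * (Ts^4 - Tsky^4), temperatures in K.
            Evaporative term: h_e * dp_vapour, a linearized mass-transfer model
            in which h_e and the vapour-pressure deficit are assumed inputs.
            """
            q_rad = eps * SIGMA * (T_surf**4 - T_sky**4)
            q_evap = h_e * dp_vapour
            return q_rad + q_evap

        # e.g. a 288 K surface under a 273 K effective sky: radiation alone
        print(panel_cooling_flux(288.0, 273.0))  # ~70 W/m^2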

  4. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and fabricating building blocks that work like their biological role models. Neuromorphic systems, as the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real-time. It is argued that future artificial vision systems ...

  5. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  6. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  7. Nanomedical device and systems design challenges, possibilities, visions

    CERN Document Server

    2014-01-01

    Nanomedical Device and Systems Design: Challenges, Possibilities, Visions serves as a preliminary guide toward the inspiration of specific investigative pathways that may lead to meaningful discourse and significant advances in nanomedicine/nanotechnology. This volume considers the potential of future innovations that will involve nanomedical devices and systems. It endeavors to explore remarkable possibilities spanning medical diagnostics, therapeutics, and other advancements that may be enabled within this discipline. In particular, this book investigates just how nanomedical diagnostic and

  8. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among various sensing channels, vision is the most important for making robots intelligent. If a robot is provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. Use of a special correlation chip and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications to robot behavior control are also introduced. (author)
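
    The local-image correlation primitive the abstract describes maps directly onto normalized cross-correlation as found in OpenCV, here in software rather than on a dedicated chip; the search-window interface below is an illustrative assumption:

        import cv2

        def track_cue(frame_gray, template, search_window):
            """Locate a small template inside a search window by normalized
            cross-correlation between local images.
            search_window : (x, y, w, h) region of frame_gray to scan
            Returns the top-left match position and the correlation score."""
            x, y, w, h = search_window
            roi = frame_gray[y:y+h, x:x+w]
            scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
            _, best, _, loc = cv2.minMaxLoc(scores)
            return (x + loc[0], y + loc[1]), best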

  9. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    International Nuclear Information System (INIS)

    D’Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-01-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  10. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Energy Technology Data Exchange (ETDEWEB)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it; Natale, Emanuela, E-mail: emanuela.natale@univaq.it [University of L’Aquila, Department of Industrial and Information Engineering and Economics (DIIIE), via G. Gronchi, 18, 67100 L’Aquila (Italy)

    2016-06-28

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.
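
    The vision system's contribution reduces, in essence, to turning a camera-tracked displacement history into a reference acceleration. A minimal sketch (plain numerical double differentiation; the low-pass filtering a real implementation would need is omitted):

        import numpy as np

        def acceleration_from_positions(x_m, fps):
            """Estimate acceleration at the sensor mounting point from a
            camera-tracked position history by double numerical differentiation.
            x_m : 1-D array of positions [m] sampled at the camera frame rate
            fps : camera frame rate [Hz]; a low-frequency camera suits the
                  0-5 Hz band studied in the paper
            """
            dt = 1.0 / fps
            v = np.gradient(x_m, dt)   # velocity
            return np.gradient(v, dt)  # acceleration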

  11. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech with support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood ...

  12. A vision fusion treatment system based on ATtiny26L

    Science.gov (United States)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of the eyeballs following a moving visual survey pole is first put forward. In this system the visual survey pole starts about 35 centimeters from the patient's face before moving to the middle position between the two eyeballs, and the patient's eyeballs follow its movement. When they cannot follow, one or both eyeballs turn to a position away from the visual survey pole; this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal controls the visual survey pole so that it moves with continuously variable speed. The movement of the visual survey pole follows the modulation law of the eyeballs tracking it.

  13. IDA's Energy Vision 2050

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Henrik; Hansen, Kenneth

    IDA’s Energy Vision 2050 provides a Smart Energy System strategy for a 100% renewable Denmark in 2050. The vision presented should not be regarded as the only option in 2050 but as one scenario out of several possibilities. With this vision the Danish Society of Engineers, IDA, presents its third contribution to an energy strategy for Denmark: IDA’s Energy Plan 2030 was prepared in 2006 and IDA’s Climate Plan was prepared in 2009. IDA’s Energy Vision 2050 is developed for IDA by representatives from The Society of Engineers and by a group of researchers at Aalborg University. It is based on state-of-the-art knowledge about how low cost energy systems can be designed while also focusing on long-term resource efficiency. The Energy Vision 2050 has the ambition to focus on all parts of the energy system rather than single technologies, but to have an approach in which all sectors are integrated. While Denmark ...

  14. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  15. Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2013-05-01

    Wheel alignment, consisting of properly checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field, since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on machine vision have been widely studied in order to automatically detect wheels' characteristic angles. In order to overcome the limitations of existing methodologies, due to measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to characteristic planes. Such planes, properly referred to a global coordinate system, are used for determining wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system proved fully compatible with the expected accuracy of wheel alignment systems.
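
    Given a contactless 3D reconstruction of the wheel face, the characteristic angles reduce to the orientation of a fitted plane. A sketch of that last step (a least-squares plane fit via SVD; the axis convention, x forward, y left, z up, is an assumption, not the paper's):

        import numpy as np

        def wheel_plane_angles(points_xyz):
            """Fit a plane to 3D points sampled on a wheel face and derive toe
            and camber from the tilt of the plane normal.
            Assumed axis convention: x forward, y left, z up."""
            pts = np.asarray(points_xyz, float)
            centred = pts - pts.mean(axis=0)
            # Normal = right singular vector with the smallest singular value
            normal = np.linalg.svd(centred)[2][-1]
            normal = normal / np.linalg.norm(normal)
            toe = np.degrees(np.arctan2(normal[0], normal[1]))     # about z
            camber = np.degrees(np.arctan2(normal[2], normal[1]))  # about x
            return toe, camber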

  16. Machine vision system for remote inspection in hazardous environments

    International Nuclear Information System (INIS)

    Mukherjee, J.K.; Krishna, K.Y.V.; Wadnerkar, A.

    2011-01-01

    Visual inspection of radioactive components needs remote inspection systems for human safety and for protecting equipment (CCD imagers) from radiation. Elaborate view-transport optics is required to deliver images to safe areas while maintaining the fidelity of image data. Automation of the system requires robots to operate such equipment. A robotized periscope has been developed to meet the challenge of remote safe viewing and vision-based inspection. (author)

  17. History of the Night

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The definition of the night, as the period between sunset and sunrise, is consistent and unalterable, regardless of culture and time. However the perception of the night and its economic, social, and cultural roles are subject to change. Which parameters determine these changes? What can we learn by studying them about the specific character of a culture? Why do people experience the night in different ways in different historical periods and how did this affect their lives? How do references to nocturnal activities in historical sources (works of art, narratives) reveal what the artists/authors wish to communicate to their audiences? Can the night be a meaningful subject of historical and archaeological enquiry? A study of the source material in the Greek world (ca. 400 BC-ca. AD 400) shows a continuous effort to colonize the night with activities of the day, to make the night safer, more productive, more rational, more efficient. The main motors for this change were social developments and religion, no...

  18. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation and the basic molecular biology and genetics of colour vision. Takes a broad interdisciplinary approach combining basics in vision sciences with the most recent developments in the area. Includes an extensive list of technical terms and explanations to encourage student understanding. Successfully brings together the most important areas of the subject into one volume.

  19. Present and future of vision systems technologies in commercial flight operations

    Science.gov (United States)

    Ward, Jim

    2016-05-01

    The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.

  20. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a ...
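
    The imaged circles themselves can be recovered with a standard Hough-based detector before being compared against the flat-ground calibration. A brief OpenCV sketch; the file names and detector parameters are illustrative assumptions:

        import cv2
        import numpy as np

        img = cv2.imread("terrain.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
        img = cv2.medianBlur(img, 5)

        # Detect the projected laser circles; distortion of their imaged shape
        # relative to the flat-ground calibration indicates raised/sunken terrain.
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                                   param1=100, param2=40,
                                   minRadius=10, maxRadius=200)
        if circles is not None:
            for cx, cy, r in np.round(circles[0]).astype(int):
                cv2.circle(img, (cx, cy), r, 255, 1)
        cv2.imwrite("circles.png", img)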

  1. Novel compact panomorph lens based vision system for monitoring around a vehicle

    Science.gov (United States)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain complete vision around the car, several sensor systems are usually necessary. To solve this issue, a customized imaging system based on a panomorph lens can provide maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor, discuss the technical requirements of such vision systems, and demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors in one. For example, a single panoramic sensor on the front of a vehicle could provide all necessary information for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.

  2. Portable electronic vision enhancement systems in comparison with optical magnifiers for near vision activities: an economic evaluation alongside a randomized crossover trial.

    Science.gov (United States)

    Bray, Nathan; Brand, Andrew; Taylor, John; Hoare, Zoe; Dickinson, Christine; Edwards, Rhiannon T

    2017-08-01

    To determine the incremental cost-effectiveness of portable electronic vision enhancement system (p-EVES) devices compared with optical low vision aids (LVAs), for improving near vision visual function, quality of life and well-being of people with a visual impairment. An AB/BA randomized crossover trial design was used. Eighty-two participants completed the study. Participants were current users of optical LVAs who had not tried a p-EVES device before and had a stable visual impairment. The trial intervention was the addition of a p-EVES device to the participant's existing optical LVA(s) for 2 months, and the control intervention was optical LVA use only, for 2 months. Cost-effectiveness and cost-utility analyses were conducted from a societal perspective. The mean cost of the p-EVES intervention was £448. Carer costs were £30 (4.46 hr) less for the p-EVES intervention compared with the LVA only control. The mean difference in total costs was £417. Bootstrapping gave an incremental cost-effectiveness ratio (ICER) of £736 (95% CI £481 to £1525) for a 7% improvement in near vision visual function. Cost per quality-adjusted life year (QALY) ranged from £56 991 (lower 95% CI = £19 801) to £66 490 (lower 95% CI = £23 055). Sensitivity analysis varying the commercial price of the p-EVES device reduced ICERs by up to 75%, with cost per QALYs falling below £30 000. Portable electronic vision enhancement system (p-EVES) devices are likely to be a cost-effective use of healthcare resources for improving near vision visual function, but this does not translate into cost-effective improvements in quality of life, capability or well-being. © 2016 The Authors. Acta Ophthalmologica published by John Wiley & Sons Ltd on behalf of Acta Ophthalmologica Scandinavica Foundation and European Association for Vision & Eye Research.
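
    The headline figures above follow from the standard incremental cost-effectiveness arithmetic, ICER = Δcost / Δeffect, with confidence intervals obtained by bootstrapping over participants. A generic sketch of that computation (illustrative only, not the study's statistical code):

        import numpy as np

        rng = np.random.default_rng(1)

        def bootstrap_icer(delta_cost, delta_effect, n_boot=5000):
            """Bootstrap the incremental cost-effectiveness ratio (ICER) from
            per-participant incremental costs and effects.
            Returns the point ICER and a percentile 95% confidence interval."""
            dc, de = np.asarray(delta_cost), np.asarray(delta_effect)
            n = len(dc)
            icers = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)  # resample participants
                icers.append(dc[idx].mean() / de[idx].mean())
            icers = np.sort(icers)
            point = dc.mean() / de.mean()
            return point, (icers[int(0.025 * n_boot)], icers[int(0.975 * n_boot)])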

  3. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is to use a vision-based row detection system. A new approach to row recognition is presented, based on a grey-scale Hough transform applied to intelligently merged images, resulting in a considerable improvement in image-processing speed.
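
    The idea transfers directly to standard tooling. A minimal sketch using OpenCV's binary-edge Hough variant on a single pre-merged greyscale frame (the paper's transform works on grey values directly; the thresholds here are assumptions):

        import cv2
        import numpy as np

        def detect_rows(grey):
            """Find dominant straight crop rows with a Hough transform.
            Returns (rho, theta) pairs for the strongest lines."""
            edges = cv2.Canny(grey, 50, 150)
            lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
            return [] if lines is None else [tuple(l[0]) for l in lines]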

  4. Vision system for diagnostic task | Merad | Global Journal of Pure ...

    African Journals Online (AJOL)

    Due to degraded environmental conditions, direct measurements are not possible. ... Degraded conditions: vibrations, water and metal-chip projections, ... Before tooling, the vision system has to answer: "is it the right piece at the right place?" ...

  5. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  6. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving objects' metric information. One typical requirement for a stereo vision system to obtain better calibration results is to guarantee that both cameras remain at the same vertical level. However, the cameras may be displaced due to the severe conditions in which a robot operates, or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The robot's stereo cameras were displaced relative to each other, causing loss of information about the surrounding environment. We implemented and verified checkerboard- and circle-grid-based calibration methods. A comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard approach.
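
    Both target detectors compared in the paper are available in OpenCV, so the comparison can be reproduced in outline as below; the board geometries, file paths and flags are assumptions. Lower reprojection RMS indicates the better calibration.

    ```python
    import glob
    import cv2
    import numpy as np

    def calibrate(paths, detect, obj_template):
        """Run cv2.calibrateCamera on whichever views the detector accepts."""
        obj_pts, img_pts, size = [], [], None
        for p in paths:
            gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
            size = gray.shape[::-1]
            found, pts = detect(gray)
            if found:
                obj_pts.append(obj_template)
                img_pts.append(pts)
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        return rms

    # Checkerboard with 9x6 inner corners (assumed geometry, unit squares).
    cb_obj = np.zeros((9 * 6, 3), np.float32)
    cb_obj[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
    detect_cb = lambda g: cv2.findChessboardCorners(g, (9, 6))

    # Asymmetric circle grid, 4 circles x 11 rows (assumed geometry).
    cg_obj = np.array([[2 * j + i % 2, i, 0] for i in range(11) for j in range(4)],
                      np.float32)
    detect_cg = lambda g: cv2.findCirclesGrid(g, (4, 11),
                                              flags=cv2.CALIB_CB_ASYMMETRIC_GRID)

    print("checkerboard RMS:", calibrate(glob.glob("checker/*.png"), detect_cb, cb_obj))
    print("circle grid RMS:", calibrate(glob.glob("circles/*.png"), detect_cg, cg_obj))
    ```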

  7. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

    Final report on the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM for Tire-Terrain CAMera. P.S. Els and C.M. Becker, University of Pretoria; contract number W911NF-14-1-0590.

  8. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

    Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  9. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.
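
    Below is a heavily simplified sketch of the fusion pattern described (GPS fused only during an initialization window, vision-derived position updates afterwards) on a 2-D constant-velocity toy state. It is not the paper's full estimator, which also carries attitude and landmark states; all matrices and noise values here are invented for illustration.

    ```python
    import numpy as np

    class MiniEKF:
        """Toy 2-D constant-velocity EKF, state = [x, y, vx, vy]."""

        def __init__(self):
            self.x = np.zeros(4)
            self.P = np.eye(4) * 100.0
            self.initializing = True     # GPS accepted only while True

        def predict(self, dt: float, q: float = 0.5):
            F = np.eye(4)
            F[0, 2] = F[1, 3] = dt       # position integrates velocity
            self.x = F @ self.x
            self.P = F @ self.P @ F.T + q * np.eye(4)

        def _update(self, z, H, R):
            y = z - H @ self.x                      # innovation
            S = H @ self.P @ H.T + R
            K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ H) @ self.P

        def gps_update(self, z):
            """Low-rate, noisy position fix; used only to bootstrap scale."""
            if self.initializing:
                H = np.array([[1., 0, 0, 0], [0, 1., 0, 0]])
                self._update(z, H, np.eye(2) * 4.0)

        def vision_update(self, z):
            """Stand-in for camera-derived position (landmark math omitted)."""
            H = np.array([[1., 0, 0, 0], [0, 1., 0, 0]])
            self._update(z, H, np.eye(2) * 0.25)
    ```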

  10. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance in terms of frame rate, resource utilization, and accuracy of the presented approaches is discussed. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system.

  11. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at the Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power than the single FPGA chip carrying hardware modules and a soft-core processor.

  12. Vision Aided State Estimation for Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2007-01-01

    This paper presents the design and verification of a state estimator for a helicopter-based slung load system. The estimator is designed to augment the IMU-driven estimator found in many helicopter UAVs and uses vision-based updates only. The process model used for the estimator is a simple 4...

  13. New energy vision of the Noogata city area; 2001 nendo Noogata shi chiiki shin energy vision

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-02-01

    For the purpose of promoting the introduction of new energy and enhancing awareness of it in Noogata City, Fukuoka Prefecture, an investigational study was conducted of the city's energy demand, the potential amount of new energy available, projects for new energy introduction, etc., and a vision was worked out. The energy consumption of the city was estimated at 4,825.4 x 10^6 MJ/y, consisting of 47.1% in the industrial sector, 26.1% in the commercial/residential sector and 24.9% in the transportation sector. By energy source, petroleum-based energy accounted for 65.7% and electric power for 25.1%. As projects for new energy introduction, the following were studied: introduction of wind power generation and photovoltaic power generation at the flower park at the foot of Mt. Fukuchi and at Nakanoshima park on the sandbank of the Onga river; introduction of photovoltaic power generation at the library. Moreover, as future models of introduction, a potential study was made of the following: installation of a stockbreeding-waste biogas plant at the compost center; installation of a fuel cell system using digestion gas from night soil treatment facilities; installation of a natural gas cogeneration system in the urban redevelopment project, etc. (NEDO)

  14. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogeneous dual-core embedded system architecture.

    Science.gov (United States)

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

  15. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  16. Biofeedback for Better Vision

    Science.gov (United States)

    1990-01-01

    Biofeedtrac, Inc.'s Accommotrac Vision Trainer, invented by Dr. Joseph Trachtman, is based on vision research performed by Ames Research Center and a special optometer developed for the Ames program by Stanford Research Institute. In the United States, about 150 million people are myopes (nearsighted), who tend to overfocus when they look at distant objects causing blurry distant vision, or hyperopes (farsighted), whose vision blurs when they look at close objects because they tend to underfocus. The Accommotrac system is an optical/electronic system used by a doctor as an aid in teaching a patient how to contract and relax the ciliary body, the focusing muscle. The key is biofeedback, wherein the patient learns to control a bodily process or function he is not normally aware of. Trachtman claims a 90 percent success rate for correcting, improving or stopping focusing problems. The Vision Trainer has also proved effective in treating other eye problems such as eye oscillation, cross eyes, and lazy eye and in professional sports to improve athletes' peripheral vision and reaction time.

  17. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users.

    Science.gov (United States)

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2015-07-01

    Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population to evaluate the impact of new vision-restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional visual ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional visual tasks for observation of performance and a case narrative summary. Results were analysed to determine whether the interview questions and functional visual tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, 26 subjects were assessed with the FLORA. Seven different evaluators administered the assessment. All 14 interview questions were asked. All 35 tasks for functional vision were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options, impossible (33 per cent), difficult (23 per cent), moderate (24 per cent) and easy (19 per cent), were used by the evaluators. Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with 'vision only' occurring 75 per cent on average with the System ON, and 29 per cent with the System OFF. The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as an assessment tool for functional vision and well-being.

  18. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    Directory of Open Access Journals (Sweden)

    Chia-Sui Wang

    2015-01-01

    Full Text Available A virtual reality (VR) driver tracking verification system is created, and its application to stereo image tracking and positioning accuracy is researched in depth. In the research, the image-depth capability of the stereo vision system is utilized to reduce the error rate of image tracking and image measurement. In a VR scenario, the function of collecting behavioral data from the driver was tested. By means of VR, racing operation is simulated, and environmental (special weather such as rain and snow) and artificial (pedestrians suddenly crossing the road, vehicles appearing from blind spots, roadblocks) variables are added as the basis for system implementation. In addition, the implementation incorporates human factors engineering to address sudden conditions that can easily arise in driving. Experimental results prove that the stereo vision system created in this research has an image-depth recognition error rate within 0.011%. The image tracking error rate may be smaller than 2.5%. In the research, the image recognition function of stereo vision is utilized to accomplish the data collection of driver tracking detection. In addition, the environmental conditions of different simulated real scenarios may also be created through VR.

  19. Vision - night blindness

    Science.gov (United States)

    ... walking through a dark room, such as a movie theater. These problems are often worse just after ...

  20. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.
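
    The abstract does not spell out the estimator itself, so the following is background rather than the authors' method: one widely used way to recover pitch from a stereo rig is to fit the ground-plane line in a v-disparity image and extrapolate it to the horizon. The vote threshold and sign convention below are assumptions.

    ```python
    import numpy as np

    def pitch_from_disparity(disp: np.ndarray, f_px: float, v0: float,
                             d_max: int = 64) -> float:
        """Estimate camera pitch from the ground plane in a disparity map.
        Build the v-disparity histogram, fit a line to its dominant (ground)
        profile, extrapolate to disparity 0 to get the horizon row v_h, then
        pitch ~ arctan((v0 - v_h) / f) (positive = camera pitched down)."""
        h, _ = disp.shape
        vdisp = np.zeros((h, d_max))
        for v in range(h):
            vals = disp[v][(disp[v] > 0) & (disp[v] < d_max)].astype(int)
            np.add.at(vdisp[v], vals, 1)
        vs = np.arange(h)
        ds = vdisp.argmax(axis=1)                 # best disparity bin per row
        mask = vdisp.max(axis=1) > 10             # keep well-supported rows
        a, b = np.polyfit(ds[mask], vs[mask], 1)  # v = a*d + b; v_h = b at d=0
        return float(np.arctan2(v0 - b, f_px))
    ```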

  1. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178

  2. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Bjorholm; Jensen, Kirsten

    2015-01-01

    The color assessment ability of a multispectral vision system is investigated by a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material; heterogeneous with varying scattering and reflectance...... are equally capable of measuring color. Moreover the vision system provides a more color rich assessment of fresh meat samples with a glossier surface, than the colorimeter. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods...... accounting for other sources of variation leading to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments.

  3. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods based on the robot's kinematic model have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor; the global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor and places two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with the single camera and 0.031 mm with dual cameras. The conclusion is that the single-camera algorithm needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.

  4. Development of VIPER: a simulator for assessing vision performance of warfighters

    Science.gov (United States)

    Familoni, Jide; Thompson, Roger; Moyer, Steve; Mueller, Gregory; Williams, Tim; Nguyen, Hung-Quang; Espinola, Richard L.; Sia, Rose K.; Ryan, Denise S.; Rivers, Bruce A.

    2016-05-01

    Background: When evaluating vision, it is important to assess not just the ability to read letters on a vision chart, but also how well one sees in real-life scenarios. As part of the Warfighter Refractive Eye Surgery Program (WRESP), visual outcomes are assessed before and after refractive surgery. A Warfighter's ability to read signs and detect and identify objects is crucial, not only when deployed in a military setting, but also in their civilian lives. Objective: VIPER, a VIsion PERformance simulator, was envisioned as actual video-based simulated driving to test warfighters' functional vision under realistic conditions. Designed to use interactive video image controlled environments at daytime, dusk, night, and with thermal imaging vision, it simulates the experience of viewing and identifying road signs and other objects while driving. We hypothesize that VIPER will facilitate efficient and quantifiable assessment of changes in vision and measurement of functional military performance. Study Design: Video images were recorded on an isolated 1.1 mile stretch of road with separate target sets of six simulated road signs and six objects of military interest. The video footage was integrated with custom-designed C++ based software that presented the simulated drive to an observer on a computer monitor at 10, 20 or 30 miles/hour. VIPER permits the observer to indicate when a target is seen and when it is identified. Distances at which the observer recognizes and identifies targets are automatically logged. Errors in recognition and identification are also recorded. This first report describes VIPER's development and a preliminary study to establish a baseline for its performance. In the study, nine soldiers viewed simulations at 10 miles/hour and 30 miles/hour, run in randomized order for each participant seated at 36 inches from the monitor. Relevance: Ultimately, patients are interested in how their vision will affect their ability to perform daily

  5. The Circadian Timing System: Making Sense of day/night gene expression

    Directory of Open Access Journals (Sweden)

    HANS G RICHTER

    2004-01-01

    Full Text Available The circadian time-keeping system ensures predictive adaptation of individuals to the reproducible 24-h day/night alternations of our planet by generating the 24-h (circadian) rhythms found in hormone release and cardiovascular, biophysical and behavioral functions, and others. In mammals, the master clock resides in the suprachiasmatic nucleus (SCN) of the hypothalamus. The molecular events determining the functional oscillation of the SCN neurons with a period of 24-h involve recurrent expression of several clock proteins that interact in complex transcription/translation feedback loops. In mammals, a glutamatergic monosynaptic pathway originating from the retina regulates the clock gene expression pattern in the SCN neurons, synchronizing them to the light:dark cycle. The emerging concept is that neural/humoral output signals from the SCN impinge upon peripheral clocks located in other areas of the brain, heart, lung, gastrointestinal tract, liver, kidney, fibroblasts, and most of the cell phenotypes, resulting in overt circadian rhythms in integrated physiological functions. Here we review the impact of day/night alternation on integrated physiology; the molecular mechanisms and input/output signaling pathways involved in SCN circadian function; the current concept of peripheral clocks; and the potential role of melatonin as a circadian neuroendocrine transducer.

  6. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    Science.gov (United States)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fuse the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
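
    In its simplest form, the inverse-square relationship the calibration exploits amounts to fitting rate ≈ S/d² against the vision-derived distances. A minimal sketch, with detector positions and count rates invented and any background term omitted:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Detector positions from the 3-D vision sensor (m) and the
    # simultaneous count rates (counts/s); all values illustrative.
    det_pos = np.array([[0., 0., 1.], [2., 0., 1.], [0., 3., 1.], [2., 3., 1.]])
    rates = np.array([900., 240., 110., 75.])

    def residuals(p):
        """p = [sx, sy, sz, S]; predicted rate = S / d^2."""
        src, strength = p[:3], p[3]
        d2 = np.sum((det_pos - src) ** 2, axis=1)
        return strength / d2 - rates

    fit = least_squares(residuals, x0=[1., 1., 0., 1000.])
    sx, sy, sz, S = fit.x
    print(f"source at ({sx:.2f}, {sy:.2f}, {sz:.2f}) m, strength {S:.0f}")
    ```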

  7. Computer vision in roadway transportation systems: a survey

    Science.gov (United States)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  8. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a 'differentially' driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of the PD controller for wall following and the PID controller to control the speed of the geared DC motor.

  9. SailSpy: a vision system for yacht sail shape measurement

    Science.gov (United States)

    Olsson, Olof J.; Power, P. Wayne; Bowman, Chris C.; Palmer, G. Terry; Clist, Roger S.

    1992-11-01

    SailSpy is a real-time vision system which we have developed for automatically measuring sail shapes and masthead rotation on racing yachts. Versions have been used by the New Zealand team in two America's Cup challenges in 1988 and 1992. SailSpy uses four miniature video cameras mounted at the top of the mast to provide views of the headsail and mainsail on either tack. The cameras are connected to the SailSpy computer below deck using lightweight cables mounted inside the mast. Images received from the cameras are automatically analyzed by the SailSpy computer, and sail shape and mast rotation parameters are calculated. The sail shape parameters are calculated by recognizing sail markers (ellipses) that have been attached to the sails, and the mast rotation parameters by recognizing deck markers painted on the deck. This paper describes the SailSpy system and some of the vision algorithms used.
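
    As a present-day stand-in for the marker-recognition step (SailSpy itself predates today's vision libraries, so this is not its original algorithm; the threshold choice and shape test are assumptions), elliptical sail markers could be found with OpenCV as follows:

    ```python
    import cv2
    import numpy as np

    def find_sail_markers(gray: np.ndarray, min_area: float = 200.0):
        """Return fitted ellipses ((cx, cy), (major, minor), angle) for
        dark elliptical markers against a bright sail."""
        _, bw = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        markers = []
        for c in contours:
            area = cv2.contourArea(c)
            if len(c) < 5 or area < min_area:     # fitEllipse needs >= 5 points
                continue
            (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
            # Accept only blobs whose area closely matches the fitted ellipse.
            if abs(area - np.pi * major * minor / 4.0) < 0.15 * area:
                markers.append(((cx, cy), (major, minor), angle))
        return markers
    ```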

  10. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev [University of Florida, Gainesville, FL, 32606 (United States)

    2015-07-01

    A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor, for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors coupled with position data, forming a network capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much in their capability but rather in their complexity and cost, which is prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse square-root fall-off of radiation intensity is explored and

  11. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev

    2015-01-01

    A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor, for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors coupled with position data, forming a network capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much in their capability but rather in their complexity and cost, which is prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse square-root fall-off of radiation intensity is explored and

  12. Early Cognitive Vision as a Frontend for Cognitive Systems

    DEFF Research Database (Denmark)

    Krüger, Norbert; Pugeault, Nicolas; Baseski, Emre

    We discuss the need of an elaborated in-between stage bridging early vision and cognitive vision which we call `Early Cognitive Vision' (ECV). This stage provides semantically rich, disambiguated and largely task independent scene representations which can be used in many contexts. In addition...

  13. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The paper discusses the roles of visioning processes and visions in foresight activities and in societal discourses and changes parallel to or following foresight activities. The overall topic can be characterised as the dynamics and mechanisms that make visions and visioning processes work...... or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning and what has made them work or not work is analysed. The empirical part is based on analyses of the roles of visions...... and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  14. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information from the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single piece of vision equipment and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  15. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Full Text Available Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  16. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  17. Modeling foveal vision

    NARCIS (Netherlands)

    Florack, L.M.J.; Sgallari, F.; Murli, A.; Paragios, N.

    2007-01-01

    A geometric model is proposed for an artificial foveal vision system, and its plausibility in the context of biological vision is explored. The model is based on an isotropic, scale-invariant two-form that describes the spatial layout of receptive fields in the visual sensorium (in the biological

  18. Functional programming for computer vision

    Science.gov (United States)

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first class data objects. Compared with imperative programs, functional programs can be parallelized better, and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for efficiently representing data structures and objects common in computer vision. In particular, we will address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high-performance vision systems, and that a functional approach simplifies the implementation and integration of vision systems greatly. Examples in C++ and SML are given.
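
    As a toy illustration of the "representing images as functions" idea (in Python rather than the SML/C++ of the paper), an image can be a pure function from coordinates to intensity, with operations composing lazily so no intermediate image buffers are allocated:

    ```python
    from typing import Callable

    Image = Callable[[float, float], float]    # (x, y) -> intensity

    def constant(v: float) -> Image:
        return lambda x, y: v

    def gradient() -> Image:
        return lambda x, y: x

    def shift(img: Image, dx: float, dy: float) -> Image:
        """Lazy translation: nothing is computed until the image is sampled."""
        return lambda x, y: img(x - dx, y - dy)

    def blend(a: Image, b: Image, w: float) -> Image:
        return lambda x, y: (1.0 - w) * a(x, y) + w * b(x, y)

    # Compose freely, then sample only the pixels actually needed:
    composite = blend(shift(gradient(), 10.0, 0.0), constant(0.5), 0.25)
    print(composite(12.0, 3.0))    # 0.75 * (12 - 10) + 0.25 * 0.5 = 1.625
    ```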

  19. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Utilizing Robot Operating System (ROS) in Robot Vision and Control, by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples.

  20. "Chrono-functional milk": The difference between melatonin concentrations in night-milk versus day-milk under different night illumination conditions.

    Science.gov (United States)

    Asher, A; Shabtay, A; Brosh, A; Eitam, H; Agmon, R; Cohen-Zinder, M; Zubidat, A E; Haim, A

    2015-01-01

    Pineal melatonin (MLT) is produced at highest levels during the night, under dark conditions. We evaluated differences in MLT concentration by comparing daytime versus night-time milk samples from two dairy farms with different night illumination conditions: (1) natural dark (Dark-Night); (2) short-wavelength Artificial Light at Night (ALAN, Night-Illuminated). Samples were collected from 14 Israeli Holstein cows at each commercial dairy farm at 04:30 h ("Night-milk") and 12:30 h ("Day-milk") and analyzed for MLT concentration. In order to study the effects of night illumination conditions on the cows' circadian rhythms, Heart Rate (HR) daily rhythms were recorded. MLT concentrations of night-milk samples from the Dark-Night group were significantly higher than under Night-Illuminated conditions (30.70 ± 1.79 and 17.81 ± 0.33 pg/ml, respectively). Interestingly, night illumination conditions also affected melatonin concentrations at daytime, where values under Dark-Night conditions were significantly higher than under Night-Illuminated conditions (5.36 ± 0.33 and 3.30 ± 0.18 pg/ml, respectively). There were no significant differences between the two treatments in milk yield and milk composition except somatic cell count (SCC), which was significantly lower (p = 0.02) in the Dark-Night group compared with the Night-Illuminated group. Cows in both groups presented significant daily HR rhythms; in Night-Illuminated cows, feeding and milking time are the "time keeper", while in the Dark-Night cows, HR rhythms were entrained by the light/dark cycle. The higher MLT concentration in Dark-Night cows together with the lower SCC values calls upon farmers to avoid exposure of cows to ALAN. Therefore, under Dark-Night conditions milk quality will improve by lowering SCC values, and separation between night and day milk can produce chrono-functional milk, naturally rich in MLT.

  1. Fiber optic coherent laser radar 3d vision system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-01-01

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system
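
    The ranging principle behind an FMCW ladar is compact enough to state inline: a linear frequency chirp of bandwidth B over period T returns from a target as a beat tone whose frequency is proportional to range, R = c·f_beat·T/(2B). A numeric sketch with illustrative (assumed) parameters:

    ```python
    C = 299_792_458.0   # speed of light, m/s

    def fmcw_range(f_beat_hz: float, chirp_bw_hz: float, chirp_period_s: float) -> float:
        """Range from FMCW beat frequency: R = c * f_beat * T / (2 * B)."""
        return C * f_beat_hz * chirp_period_s / (2.0 * chirp_bw_hz)

    # Illustrative: a 100 GHz optical chirp over 1 ms and a 5 MHz beat tone
    # place the target at about 7.5 m.
    print(fmcw_range(5e6, 100e9, 1e-3))
    ```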

  2. Night Terrors in Children

    OpenAIRE

    Feferman, Irv

    1988-01-01

    Night terrors are a bizarre sleep disorder that affects young children. The child partially awakes during the night agitated, afraid and terrified, and cannot be consoled. These events, which may be related to emotional turmoil, are self-limiting. Psychiatric evaluation is indicated in certain cases, and drug therapy is almost never necessary. Parents should be reassured that night terrors are not dangerous and do not reflect any serious pathology.

  3. Interoperability Strategic Vision

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Knight, Mark R.; Melton, Ronald B.; Narang, David; Martin, Maurice; Nordman, Bruce; Khandekar, Aditya; Hardy, Keith S.

    2018-02-28

    The Interoperability Strategic Vision whitepaper aims to promote a common understanding of the meaning and characteristics of interoperability and to provide a strategy to advance the state of interoperability as applied to integration challenges facing grid modernization. This includes addressing the quality of integrating devices and systems and the discipline to improve the process of successfully integrating these components as business models and information technology improve over time. The strategic vision for interoperability described in this document applies throughout the electric energy generation, delivery, and end-use supply chain. Its scope includes interactive technologies and business processes from bulk energy levels to lower voltage level equipment and the millions of appliances that are becoming equipped with processing power and communication interfaces. A transformational aspect of a vision for interoperability in the future electric system is the coordinated operation of intelligent devices and systems at the edges of grid infrastructure. This challenge offers an example for addressing interoperability concerns throughout the electric system.

  4. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes the robot vision and operation system for the Nuclear Advanced Robot. The robot vision consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along it. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, the system can easily be operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by one person. (author)

  5. How do different definitions of night shift affect the exposure assessment of night work?

    DEFF Research Database (Denmark)

    Garde, Anne Helene; Hansen, Johnni; Kolstad, Henrik A

    2016-01-01

    the reference definition (at least 3 h of work between 24:00 and 05:00) and definitions using a period during the night. The overlap with definitions based on starting and ending time was less pronounced (64-71 %). The proportion of classified night shifts differs little when night shifts are based...

  6. Embedded Vehicle Speed Estimation System Using an Asynchronous Temporal Contrast Vision Sensor

    Directory of Open Access Journals (Sweden)

    D. Bauer

    2007-01-01

    Full Text Available This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, wide dynamic range of 120 dB of illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The error of the speed estimation of all algorithms is near zero mean and has a standard deviation better than 3% for both traffic flow directions. The results and the accuracy limitations as well as the combined use of the algorithms in the system are discussed.
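
    The asynchronous timestamps make the simplest speed estimate almost trivial; the sketch below (a toy, not one of the paper's three algorithms, and the event layout is an assumption) times a vehicle's passage between two virtual detector rows a known distance apart.

    ```python
    import numpy as np

    def speed_from_events(events, row_a: int, row_b: int, gap_m: float) -> float:
        """events: iterable of (t_us, x, y) address-events from the sensor.
        Speed from the median event time at two virtual lines gap_m apart."""
        t_a = np.median([t for t, x, y in events if y == row_a])
        t_b = np.median([t for t, x, y in events if y == row_b])
        dt = abs(t_b - t_a) * 1e-6     # microsecond timestamps -> seconds
        return gap_m / dt              # m/s

    # E.g., lines mapped 8 m apart on the road, crossed 0.36 s apart:
    # speed_from_events(ev, 40, 90, 8.0) -> ~22.2 m/s (~80 km/h)
    ```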

  7. Preliminary evidence for a change in spectral sensitivity of the circadian system at night

    Directory of Open Access Journals (Sweden)

    Parsons Robert H

    2005-12-01

    Full Text Available Abstract Background It is well established that the absolute sensitivity of the suprachiasmatic nucleus to photic stimulation received through the retino-hypothalamic tract changes throughout the 24-hour day. It is also believed that a combination of classical photoreceptors (rods and cones) and melanopsin-containing retinal ganglion cells participate in circadian phototransduction, with a spectral sensitivity peaking between 440 and 500 nm. It is still unknown, however, whether the spectral sensitivity of the circadian system also changes throughout the solar day. Reported here is a new study that was designed to determine whether the spectral sensitivity of the circadian retinal phototransduction mechanism, measured through melatonin suppression and iris constriction, varies at night. Methods Human adult males were exposed to a high-pressure mercury lamp [450 lux (170 μW/cm2) at the cornea] and an array of blue light emitting diodes [18 lux (29 μW/cm2) at the cornea] during two nighttime experimental sessions. Both melatonin suppression and iris constriction were measured during and after a one-hour light exposure just after midnight and just before dawn. Results An increase in the percentage of melatonin suppression and an increase in pupil constriction for the mercury source relative to the blue light source at night were found, suggesting a temporal change in the contribution of photoreceptor mechanisms leading to melatonin suppression and, possibly, iris constriction by light in humans. Conclusion The preliminary data presented here suggest a change in the spectral sensitivity of circadian phototransduction mechanisms at two different times of the night. These findings are hypothesized to be the result of a change in the sensitivity of the melanopsin-expressing retinal ganglion cells to light during the night.

  8. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

    This Special Edited Volume is a unique approach towards a computational solution for the upcoming field of study called Vision Science. From a scientific firmament, Optics, Ophthalmology, and Optical Science have traversed an odyssey of optimizing configurations of optical systems, surveillance cameras and other nano-optical devices with the metaphor of nano science and technology. Still these systems fall short of the computational aspects needed to achieve the pinnacle of the human vision system. In this edited volume much attention has been given to addressing the coupling issues between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science like visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision etc. This issue carries some latest developments in the form of research articles and presentations. The volume is rich in content with technical tools ...

  9. High dynamic range vision sensor for automotive applications

    Science.gov (United States)

    Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois

    2005-02-01

    A 128 x 128 pixel, 120 dB vision sensor extracting, at the pixel level, the contrast magnitude and direction of local image features is used to implement a lane tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, it ensures a very stable image feature representation even with high spatial and temporal inhomogeneities of the illumination. Dispatching image features off-chip is done according to contrast magnitude, prioritizing features with high contrast magnitude. This allows the amount of data transmitted out of the chip, and hence the processing power required for subsequent processing stages, to be drastically reduced. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited, which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast direction map. Then, it performs quadratic fits on selected 3-by-3 pixel kernels to achieve sub-pixel accuracy in estimating the lane marking positions. The resulting precision of the vehicle lateral position estimate is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rainy conditions.
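
    The sub-pixel step mentioned above (quadratic fits on selected kernels) reduces, in one dimension, to the classic three-point parabola-vertex formula, sketched below for clarity; the deployed system fits over 3-by-3 kernels.

    ```python
    def subpixel_peak(f_m1: float, f_0: float, f_p1: float) -> float:
        """Vertex of the parabola through (-1, f_m1), (0, f_0), (+1, f_p1).
        Returns the peak offset from the center sample, in pixels; it lies
        in [-0.5, 0.5] when the center sample is a true local extremum."""
        denom = f_m1 - 2.0 * f_0 + f_p1
        return 0.0 if denom == 0.0 else 0.5 * (f_m1 - f_p1) / denom

    # A contrast profile peaking slightly right of the center pixel:
    print(subpixel_peak(10.0, 30.0, 14.0))   # ~ +0.056 px, toward the brighter side
    ```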

  10. Night shift work exposure profile and obesity: Baseline results from a Chinese night shift worker cohort.

    Science.gov (United States)

    Sun, Miaomiao; Feng, Wenting; Wang, Feng; Zhang, Liuzhuo; Wu, Zijun; Li, Zhimin; Zhang, Bo; He, Yonghua; Xie, Shaohua; Li, Mengjie; Fok, Joan P C; Tse, Gary; Wong, Martin C S; Tang, Jin-Ling; Wong, Samuel Y S; Vlaanderen, Jelle; Evans, Greg; Vermeulen, Roel; Tse, Lap Ah

    2018-01-01

    This study aimed to evaluate the associations between types of night shift work and different indices of obesity using the baseline information from a prospective cohort study of night shift workers in China. A total of 3,871 workers from five companies were recruited from the baseline survey. A structured self-administered questionnaire was employed to collect the participants' demographic information, lifetime working history, and lifestyle habits. Participants were grouped into rotating, permanent and irregular night shift work groups. Anthropometric parameters were assessed by healthcare professionals. Multiple logistic regression models were used to evaluate the associations between night shift work and different indices of obesity. Night shift workers had increased risk of overweight and obesity, and odds ratios (ORs) were 1.17 (95% CI, 0.97-1.41) and 1.27 (95% CI, 0.74-2.18), respectively. Abdominal obesity had a significant but marginal association with night shift work (OR = 1.20, 95% CI, 1.01-1.43). A positive gradient between the number of years of night shift work and overweight or abdominal obesity was observed. Permanent night shift work showed the highest odds of being overweight (OR = 3.94, 95% CI, 1.40-11.03) and having increased abdominal obesity (OR = 3.34, 95% CI, 1.19-9.37). Irregular night shift work was also significantly associated with overweight (OR = 1.56, 95% CI, 1.13-2.14), but its association with abdominal obesity was borderline (OR = 1.26, 95% CI, 0.94-1.69). By contrast, the association between rotating night shift work and these parameters was not significant. Permanent and irregular night shift work were more likely to be associated with overweight or abdominal obesity than rotating night shift work. These associations need to be verified in prospective cohort studies.
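
    For readers unfamiliar with how such odds ratios are obtained, a minimal sketch of a multiple logistic regression in Python follows; the column names ('overweight', 'night_shift', 'age') are hypothetical, and the study's actual models adjusted for more covariates than shown here:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def night_shift_or(df: pd.DataFrame):
        """Odds ratio (with 95% CI) of overweight for night shift work.

        Exponentiated logistic-regression coefficients are odds ratios.
        """
        X = sm.add_constant(df[["night_shift", "age"]])
        fit = sm.Logit(df["overweight"], X).fit(disp=False)
        odds_ratios = np.exp(fit.params)
        ci = np.exp(fit.conf_int())  # 95% CI, on the OR scale
        return odds_ratios["night_shift"], tuple(ci.loc["night_shift"])
    ```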

  11. Night shift work exposure profile and obesity: Baseline results from a Chinese night shift worker cohort

    Science.gov (United States)

    Feng, Wenting; Wang, Feng; Zhang, Liuzhuo; Wu, Zijun; Li, Zhimin; Zhang, Bo; He, Yonghua; Xie, Shaohua; Li, Mengjie; Fok, Joan P. C.; Tse, Gary; Wong, Martin C. S.; Tang, Jin-ling; Wong, Samuel Y. S.; Vlaanderen, Jelle; Evans, Greg; Vermeulen, Roel; Tse, Lap Ah

    2018-01-01

    Aims This study aimed to evaluate the associations between types of night shift work and different indices of obesity using the baseline information from a prospective cohort study of night shift workers in China. Methods A total of 3,871 workers from five companies were recruited from the baseline survey. A structured self-administered questionnaire was employed to collect the participants’ demographic information, lifetime working history, and lifestyle habits. Participants were grouped into rotating, permanent and irregular night shift work groups. Anthropometric parameters were assessed by healthcare professionals. Multiple logistic regression models were used to evaluate the associations between night shift work and different indices of obesity. Results Night shift workers had increased risk of overweight and obesity, and odds ratios (ORs) were 1.17 (95% CI, 0.97–1.41) and 1.27 (95% CI, 0.74–2.18), respectively. Abdominal obesity had a significant but marginal association with night shift work (OR = 1.20, 95% CI, 1.01–1.43). A positive gradient between the number of years of night shift work and overweight or abdominal obesity was observed. Permanent night shift work showed the highest odds of being overweight (OR = 3.94, 95% CI, 1.40–11.03) and having increased abdominal obesity (OR = 3.34, 95% CI, 1.19–9.37). Irregular night shift work was also significantly associated with overweight (OR = 1.56, 95% CI, 1.13–2.14), but its association with abdominal obesity was borderline (OR = 1.26, 95% CI, 0.94–1.69). By contrast, the association between rotating night shift work and these parameters was not significant. Conclusion Permanent and irregular night shift work were more likely to be associated with overweight or abdominal obesity than rotating night shift work. These associations need to be verified in prospective cohort studies. PMID:29763461

  12. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  13. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2013-01-01

    Full Text Available With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of recent developments in the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition.

  14. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining fast open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.
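
    The RMS trajectory-following error reported above is a standard metric; a small sketch of how it is typically computed from sampled executed and planned paths (hypothetical arrays, not the authors' code):

    ```python
    import numpy as np

    def rms_trajectory_error(executed_xy: np.ndarray, planned_xy: np.ndarray) -> float:
        """RMS of point-wise Euclidean deviations between two sampled paths.

        Both arrays have shape (N, 2) and are assumed to be time-aligned.
        """
        deviations = np.linalg.norm(executed_xy - planned_xy, axis=1)
        return float(np.sqrt(np.mean(deviations ** 2)))
    ```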

  15. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Full Text Available Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has largely shifted from manual inspection to machine-assisted vision inspection. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine vision based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, having different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found by using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed at the optimum location.
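
    The paper does not spell out its vision algorithm here; a plausible sketch of locating a circular pouring cup with OpenCV's Hough circle transform follows (all parameter values are hypothetical and depend on cup size and camera geometry):

    ```python
    import cv2
    import numpy as np

    def pouring_cup_center(gray: np.ndarray):
        """Return ((x, y), radius) in pixels of the most salient circle, or None."""
        blurred = cv2.medianBlur(gray, 5)  # suppress sand texture noise
        circles = cv2.HoughCircles(
            blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
            param1=100, param2=30, minRadius=20, maxRadius=200)
        if circles is None:
            return None
        x, y, r = np.round(circles[0, 0]).astype(int)
        return (x, y), r
    ```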

  16. The Light Plane Calibration Method of the Laser Welding Vision Monitoring System

    Science.gov (United States)

    Wang, B. G.; Wu, M. H.; Jia, W. P.

    2018-03-01

    In the aerospace and automobile industries, sheet-steel parts are very important. In recent years, laser welding has been used to weld such sheet-steel parts. The seam width between the two parts is usually less than 0.1 mm. Because the fixturing error cannot be eliminated, welding quality can be greatly affected. In order to improve the welding quality, line structured light is employed in the vision monitoring system to plan the welding path before welding. To improve the weld precision, the vision system is located on the Z axis of the computer numerical control (CNC) tool. A planar pattern is placed on the X-Y plane of the CNC tool, and the structured light is projected onto the planar pattern. The vision system stops at three different positions along the Z axis of the CNC tool, and the camera shoots an image of the planar pattern at every position. Using the calculated sub-pixel center line of the structured light, the world coordinates of the center light line can be computed. Thus, the structured-light plane can be obtained by fitting the structured-light lines. Experimental results show the effectiveness of the proposed method.
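
    Once center-line points have been collected at the Z stations, the plane fit itself is routine; a minimal least-squares sketch (my own illustration, not the authors' code) using the SVD:

    ```python
    import numpy as np

    def fit_light_plane(points: np.ndarray):
        """Fit a plane n . p = d to N >= 3 world points on the light sheet.

        points: (N, 3) array of structured-light center-line points.
        Returns (unit normal n, offset d); n is the direction of least variance.
        """
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        n = vt[-1]
        return n, float(n @ centroid)
    ```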

  17. Intelligent Machine Vision for Automated Fence Intruder Detection Using Self-organizing Map

    Directory of Open Access Journals (Sweden)

    Veldin A. Talorete Jr.

    2017-03-01

    Full Text Available This paper presents an intelligent machine vision system for automated fence intruder detection. A series of still images containing fence events, captured with Internet Protocol cameras, was used as input data to the system. Two classifiers were used: the first classifies human posture and the second classifies intruder location. The classifiers were implemented using a Self-Organizing Map after several image segmentation processes. The human posture classifier is in charge of classifying the detected subject's posture patterns from the subject's silhouette. The intruder localization classifier estimates the location of the intruder with respect to the fence, using geometric features extracted from the images as inputs. The system is capable of activating the alarm, displaying the actual image, and indicating the location of the intruder when one is detected. In detecting intruder posture, the system achieved a success rate of 88%. Overall system accuracy is 83% for day-time intruder localization and 88% for night-time intruder localization.
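
    For readers new to Self-Organizing Maps, a compact NumPy training loop follows; it is a generic SOM sketch (grid size, rates, and decay schedules are hypothetical), not the classifier trained in the paper:

    ```python
    import numpy as np

    def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a Self-Organizing Map on (N, dim) feature vectors.

        Returns the (h, w, dim) weight lattice; each input is mapped to its
        best-matching unit (BMU), and weights near the BMU are pulled toward it.
        """
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                      indexing="ij"), axis=-1)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=-1)), (h, w))
            decay = np.exp(-t / iters)
            sigma = sigma0 * decay
            neigh = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=-1)
                           / (2 * sigma ** 2))[..., None]
            weights += lr0 * decay * neigh * (x - weights)
        return weights
    ```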

  18. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.
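
    The paper's roll-independent pitch estimator is not reproduced here; as background, a textbook relation that many such methods build on recovers pitch from the image row of the ground vanishing line, assuming roll has already been compensated (function and variable names are illustrative):

    ```python
    import numpy as np

    def pitch_from_horizon(v_horizon: float, cy: float, fy: float) -> float:
        """Camera pitch in radians from the horizon row.

        v_horizon: image row (px) where the ground vanishing line crosses;
        cy, fy: principal-point row and focal length from the intrinsics.
        """
        return float(np.arctan2(v_horizon - cy, fy))

    # e.g. a horizon 30 px below the principal point with fy = 800 px
    # gives a downward pitch of about 2.1 degrees.
    ```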

  19. Night Eating Disorders

    Directory of Open Access Journals (Sweden)

    Deniz Tuncel

    2009-08-01

    Full Text Available Hunger is an awakening-related biological impulse. The relationship between hunger and sleep is moderated by the control of the homeostatic and circadian rhythms of the body. Abnormal eating behavior during the sleep period can result from different causes. Abnormal eating during the main sleep period has been categorized as either night eating syndrome or sleep-related eating disorder. Night eating syndrome (NES) is an eating disorder characterised by the clinical features of morning anorexia, evening hyperphagia, and insomnia with awakenings followed by nocturnal food ingestion. Recently, night eating syndrome has been conceptualized as a delayed circadian intake of food. Sleep-related eating disorder is thought to represent a parasomnia, and as such is included within the revised International Classification of Sleep Disorders (ICSD-2); it is characterized by nocturnal partial arousals associated with recurrent episodes of involuntary food consumption and altered levels of consciousness. Whether sleep-related eating disorder and night eating syndrome represent different diseases or are part of a continuum is still debated. This review summarizes their characteristics, treatment outcomes and the differences between them.

  20. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools that can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed against a conventional CIE L*a*b* colorimeter, and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient application to automation of industrial processes in quality control in the food industry sector.
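
    As an illustration of the measurement itself, a short sketch converts an image patch to CIE L*a*b* with OpenCV and computes the relative error against a colorimeter reading; the unpacking constants are OpenCV's documented 8-bit Lab encoding, while the names and usage are hypothetical:

    ```python
    import cv2
    import numpy as np

    def mean_lab(bgr_patch: np.ndarray):
        """Mean CIE L*a*b* of an 8-bit BGR patch.

        OpenCV packs 8-bit Lab as L*255/100, a+128, b+128, so we undo that.
        """
        lab = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2Lab).astype(float)
        L, a, b = lab.reshape(-1, 3).mean(axis=0)
        return L * 100.0 / 255.0, a - 128.0, b - 128.0

    def percent_error(measured: float, reference: float) -> float:
        """Relative error (%) of a CVS reading against the colorimeter."""
        return abs(measured - reference) / abs(reference) * 100.0
    ```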

  2. System for synthetic vision and augmented reality in future flight decks

    Science.gov (United States)

    Behringer, Reinhold; Tam, Clement K.; McGee, Joshua H.; Sundareswaran, Venkataraman; Vassiliou, Marius S.

    2000-06-01

    Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that convey the vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed during bad weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions which would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUD). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used for obtaining correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual clues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc sec digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (on ground level), the system has been implemented on a wearable computer.

  3. The research of binocular vision ranging system based on LabVIEW

    Science.gov (United States)

    Li, Shikuan; Yang, Xu

    2017-10-01

    Based on the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized in LabVIEW software, and camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective and can be used in corresponding industrial applications.
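
    The underlying range equation is the classic pinhole stereo relation Z = f·B/d; a one-function sketch with hypothetical numbers:

    ```python
    def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
        """Range Z = f * B / d for a rectified stereo pair.

        f_px: focal length in pixels; baseline_m: camera separation in metres;
        disparity_px: horizontal pixel shift of the matched feature.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return f_px * baseline_m / disparity_px

    # e.g. f = 800 px, B = 0.12 m, d = 16 px  ->  Z = 6.0 m
    ```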

  4. A Midsummer Night's Science

    CERN Multimedia

    2001-01-01

    Last year, the first Science Night attracted nearly 1500 people. Dipping into history for the space of one night? This is the idea of Geneva's Museum of the History of Science, which is organizing its second Science Night, on 7 and 8 July, on the history of science. The first such event, held last year, was a considerable success with almost 15 000 visitors. The second Science Night, to be held in the magnificent setting of the Perle du Lac Park in Geneva, promises to be a winner too. By making science retell its own history, this major event is intended to show how every scientific and technical breakthrough is the culmination of a long period of growth that began hundreds of years in the past. Dozens of activities and events are included in this programme of time travel: visitors can study the night sky through telescopes and see what Galileo first observed, and then go to see a play on the life of the Italian scientist. Another play, commissioned specially for the occasion, will honour Geneva botanist De ...

  5. Understanding and applying machine vision

    CERN Document Server

    Zeuch, Nello

    2000-01-01

    A discussion of applications of machine vision technology in the semiconductor, electronic, automotive, wood, food, pharmaceutical, printing, and container industries. It describes systems that enable projects to move forward swiftly and efficiently, and focuses on the nuances of the engineering and system integration of machine vision technology.

  6. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    Directory of Open Access Journals (Sweden)

    Asraf Ali

    2012-08-01

    Full Text Available Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. This question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision-based and non-vision-based sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases "gait disorder", "rehabilitation", "vision sensor", or "non-vision sensor" in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words "and", "or", and "not" were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders.

  7. A remote assessment system with a vision robot and wearable sensors.

    Science.gov (United States)

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes a remote rehabilitation assessment system, currently under research, that has a 6-degree-of-freedom two-camera vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart device, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  8. System of error detection in the manufacture of garments using artificial vision

    Science.gov (United States)

    Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.

    2017-12-01

    A computer vision system is implemented to detect errors in the cutting stage of the garment manufacturing process in the textile industry. It provides a solution for errors within the process that cannot easily be detected by an employee, and it significantly increases the speed of quality review. In the textile industry, as in many others, quality control of manufactured products is required, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage of the garment manufacturing process and thereby increase the productivity of textile processes by reducing costs.

  9. Machine vision system for automated detection of stained pistachio nuts

    Science.gov (United States)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved in manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bichromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bichromatic sorter reject stream and 15% for the small shelling stock stream.

  10. Low Cost Vision Based Personal Mobile Mapping System

    Science.gov (United States)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates of better than 10 cm accuracy.

  11. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

    Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates of better than 10 cm accuracy.

  12. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D online contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe is located by means of the stereo vision system and tracking markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.

  13. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  14. Multiple episodes of convergence in genes of the dim light vision pathway in bats.

    Directory of Open Access Journals (Sweden)

    Yong-Yi Shen

    Full Text Available The molecular basis of the evolution of phenotypic characters is very complex and is poorly understood, with few examples documenting the roles of multiple genes. Considering that a single gene cannot fully explain the convergence of phenotypic characters, we chose to study the convergent evolution of rod vision in two divergent bats from a network perspective. The Old World fruit bats (Pteropodidae) are non-echolocating and have binocular vision, whereas the sheath-tailed bats (Emballonuridae) are echolocating and have monocular vision; however, they both have relatively large eyes and rely more on rod vision to find food and navigate at night. We found that the genes CRX, which plays an essential role in the differentiation of photoreceptor cells, SAG, which is involved in the desensitization of the photoactivated transduction cascade, and the photoreceptor gene RH, which is directly responsible for the perception of dim light, have undergone parallel sequence evolution in two divergent lineages of bats with larger eyes (Pteropodidae and Emballonuroidea). The multiple convergent events in the network of genes essential for rod vision is a rare phenomenon that illustrates the importance of investigating pathways and networks in the evolution of the molecular basis of phenotypic convergence.

  15. PePSS - A portable sky scanner for measuring extremely low night-sky brightness

    Science.gov (United States)

    Kocifaj, Miroslav; Kómar, Ladislav; Kundracik, František

    2018-05-01

    A new portable sky scanner designed for low-light-level detection at night has been developed and employed in night sky brightness measurements in a rural region. The fast readout, adjustable sensitivity and linear response guaranteed over 5-6 orders of magnitude make the device well suited for narrow-band photometry in both dark areas and bright urban and suburban environments. Quasi-monochromatic night-sky brightness data are advantageous for accurately characterizing the spectral power distribution of scattered and emitted light, and also allow light output patterns of whole-city light sources to be retrieved. The sky scanner can operate in both night and day regimes, taking advantage of the complementarity of the two radiance data types. Owing to its inherently very high sensitivity, the photomultiplier tube can be used in night sky radiometry, while the spectrometer-equipped system component, capable of detecting elevated intensities, is used in daylight monitoring. Daylight is a source of information on atmospheric optical properties, which in turn are necessary for processing night sky radiances. We believe that the sky scanner has the potential to revolutionize night-sky monitoring systems.

  16. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    Science.gov (United States)

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential use of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method proved accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  17. Nightmares and Night Terrors

    Science.gov (United States)

    ... able to tell you what happened in the dream and why it was scary. Your child may have trouble going back to sleep. Your child might have the same dream again on other nights. What are night terrors? ...

  18. Vision system for measuring wagon buffers’ lateral movements

    Directory of Open Access Journals (Sweden)

    Barjaktarović Marko

    2013-01-01

    Full Text Available This paper presents a vision system designed for measuring horizontal and vertical displacements of a railway wagon body. The setup comprises a commercial webcam and a cooperative target of an appropriate shape. The lateral buffer movement is determined by computing the target displacement in real time, processing the camera image on a LabVIEW platform with the free OpenCV library. Laboratory experiments demonstrate an accuracy better than ±0.5 mm within a 50 mm measuring range.
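
    The paper does not detail its OpenCV pipeline; one simple way to track a high-contrast cooperative target is normalized template matching, sketched below (the millimetre-per-pixel scale and the names are hypothetical):

    ```python
    import cv2

    def target_position_mm(frame_gray, template_gray, mm_per_px: float):
        """Locate the cooperative target and return its top-left corner in mm."""
        scores = cv2.matchTemplate(frame_gray, template_gray,
                                   cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)  # best match = maximum score
        return best[0] * mm_per_px, best[1] * mm_per_px
    ```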

  19. A Practical Solution Using A New Approach To Robot Vision

    Science.gov (United States)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS have lower development costs when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens to robot vision more industrial applications that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling, and new developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write

  20. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    Science.gov (United States)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube, with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Use of the vision sensor to determine the location of a sensor would also limit the possible locations, and it does not allow for room dependence (facility-dependent deviation) to generate a detector pseudo-location to be used for later data analysis. Using manually measured source location data, the algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average
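
    The paper's "calibration-difference" metric is simply the Euclidean distance between the predicted and measured detector positions; a two-line sketch with hypothetical coordinates:

    ```python
    import numpy as np

    def calibration_difference(predicted_xyz, measured_xyz) -> float:
        """Euclidean distance between algorithm-predicted and hand-measured
        detector locations, in the same units as the inputs."""
        return float(np.linalg.norm(np.subtract(predicted_xyz, measured_xyz)))

    # e.g. calibration_difference((1.0, 2.0, 0.5), (1.1, 2.1, 0.45)) -> 0.15 m
    ```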

  1. Multi-Camera and Structured-Light Vision System (MSVS for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

    Full Text Available Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS. First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  2. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can achieve robust and real-time detection and recognition of parking spaces. During the parking process, omnidirectional information about the environment can be obtained from four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. To achieve this purpose, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine the four individual fisheye images into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.
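
    As background on the feature-extraction step: the Radon transform integrates an image along lines at each angle, so strong straight markings show up as sinogram peaks. A small sketch with scikit-image (my illustration, not the authors' method):

    ```python
    import numpy as np
    from skimage.transform import radon

    def dominant_marking_angle(edge_img: np.ndarray) -> float:
        """Angle in degrees of the strongest straight feature in an edge image."""
        theta = np.arange(180.0)
        sinogram = radon(edge_img, theta=theta, circle=False)
        _, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
        return float(theta[angle_idx])
    ```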

  3. Container-code recognition system based on computer vision and deep neural networks

    Science.gov (United States)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both computer vision algorithms and neural networks, and generates a better detection result by combining the two to avoid the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves an overall recognition accuracy of 93%.

  4. Determinants of day-night difference in blood pressure, a comparison with determinants of daytime and night-time blood pressure.

    Science.gov (United States)

    Musameh, M D; Nelson, C P; Gracey, J; Tobin, M; Tomaszewski, M; Samani, N J

    2017-01-01

    Blunted day-night difference in blood pressure (BP) is an independent cardiovascular risk factor, although there is limited information on determinants of diurnal variation in BP. We investigated determinants of day-night difference in systolic (SBP) and diastolic (DBP) BP and how these compared with determinants of daytime and night-time SBP and DBP. We analysed the association of mean daytime, mean night-time and mean day-night difference (defined as (mean daytime - mean night-time)/mean daytime) in SBP and DBP with clinical, lifestyle and biochemical parameters from 1562 adult individuals (mean age 38.6) from 509 nuclear families recruited in the GRAPHIC Study. We estimated the heritability of the various BP phenotypes. In multivariate analysis, there were significant associations of age, sex, markers of adiposity (body mass index and waist-hip ratio), plasma lipids (total and low-density lipoprotein cholesterol and triglycerides), serum uric acid, alcohol intake and current smoking status on daytime or night-time SBP and/or DBP. Of these, only age (P=4.7 × 10−5), total cholesterol (P=0.002), plasma triglycerides (P=0.006) and current smoking (P=3.8 × 10−9) associated with day-night difference in SBP, and age (P=0.001), plasma triglyceride (P=2.2 × 10−5) and current smoking (3.8 × 10−4) associated with day-night difference in DBP. 24-h, daytime and night-time SBP and DBP showed substantial heritability (ranging from 18-43%). In contrast, day-night difference in SBP showed a lower heritability (13%), while heritability of day-night difference in DBP was not significant. These data suggest that specific clinical, lifestyle and biochemical factors contribute to inter-individual variation in daytime, night-time and day-night differences in SBP and DBP. Variation in day-night differences in BP is largely non-genetic.
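
    The study's dipping index is defined directly in the text; a tiny worked example (with hypothetical readings) makes the normalization concrete:

    ```python
    def day_night_difference(mean_day: float, mean_night: float) -> float:
        """Relative day-night BP dip: (mean daytime - mean night-time) / mean daytime."""
        return (mean_day - mean_night) / mean_day

    # e.g. SBP of 128 mmHg by day and 112 mmHg by night -> 0.125, a 12.5% dip
    ```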

  5. Determinants of day–night difference in blood pressure, a comparison with determinants of daytime and night-time blood pressure

    Science.gov (United States)

    Musameh, M D; Nelson, C P; Gracey, J; Tobin, M; Tomaszewski, M; Samani, N J

    2017-01-01

    Blunted day–night difference in blood pressure (BP) is an independent cardiovascular risk factor, although there is limited information on determinants of diurnal variation in BP. We investigated determinants of day–night difference in systolic (SBP) and diastolic (DBP) BP and how these compared with determinants of daytime and night-time SBP and DBP. We analysed the association of mean daytime, mean night-time and mean day–night difference (defined as (mean daytime−mean night-time)/mean daytime) in SBP and DBP with clinical, lifestyle and biochemical parameters from 1562 adult individuals (mean age 38.6) from 509 nuclear families recruited in the GRAPHIC Study. We estimated the heritability of the various BP phenotypes. In multivariate analysis, there were significant associations of age, sex, markers of adiposity (body mass index and waist–hip ratio), plasma lipids (total and low-density lipoprotein cholesterol and triglycerides), serum uric acid, alcohol intake and current smoking status on daytime or night-time SBP and/or DBP. Of these, only age (P=4.7 × 10−5), total cholesterol (P=0.002), plasma triglycerides (P=0.006) and current smoking (P=3.8 × 10−9) associated with day–night difference in SBP, and age (P=0.001), plasma triglyceride (P=2.2 × 10−5) and current smoking (3.8 × 10−4) associated with day–night difference in DBP. 24-h, daytime and night-time SBP and DBP showed substantial heritability (ranging from 18–43%). In contrast day–night difference in SBP showed a lower heritability (13%) while heritability of day–night difference in DBP was not significant. These data suggest that specific clinical, lifestyle and biochemical factors contribute to inter-individual variation in daytime, night-time and day–night differences in SBP and DBP. Variation in day–night differences in BP is largely non-genetic. PMID:26984683

  6. Nocturnal vision and landmark orientation in a tropical halictid bee.

    Science.gov (United States)

    Warrant, Eric J; Kelber, Almut; Gislén, Anna; Greiner, Birgit; Ribi, Willi; Wcislo, William T

    2004-08-10

    Some bees and wasps have evolved nocturnal behavior, presumably to exploit night-flowering plants or avoid predators. Like their day-active relatives, they have apposition compound eyes, a design usually found in diurnal insects. The insensitive optics of apposition eyes are not well suited for nocturnal vision. How well then do nocturnal bees and wasps see? What optical and neural adaptations have they evolved for nocturnal vision? We studied female tropical nocturnal sweat bees (Megalopta genalis) and discovered that they are able to learn landmarks around their nest entrance prior to nocturnal foraging trips and to use them to locate the nest upon return. The morphology and optics of the eye, and the physiological properties of the photoreceptors, have evolved to give Megalopta's eyes almost 30 times greater sensitivity to light than the eyes of diurnal worker honeybees, but this alone does not explain their nocturnal visual behavior. This implies that sensitivity is improved by a strategy of photon summation in time and in space, the latter of which requires the presence of specialized cells that laterally connect ommatidia into groups. First-order interneurons, with significantly wider lateral branching than those found in diurnal bees, have been identified in the first optic ganglion (the lamina ganglionaris) of Megalopta's optic lobe. We believe that these cells have the potential to mediate spatial summation. Despite the scarcity of photons, Megalopta is able to visually orient to landmarks at night in a dark forest understory, an ability permitted by unusually sensitive apposition eyes and neural photon summation.

  7. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model

    International Nuclear Information System (INIS)

    Jacobson, Jacob J.; Jeffers, Robert F.; Matthern, Gretchen E.; Piet, Steven J.; Baker, Benjamin A.; Grimm, Joseph

    2009-01-01

    The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., 'reactor types' not individual reactors and 'separation types' not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste. VISION is comprised of several Microsoft

  8. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    Science.gov (United States)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft, to enhance all-weather operational capabilities with improvements in safety and pilot Situation Awareness (SA). A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle on an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time synchronization with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on HANSA, the CSIR-NAL research aircraft, in a Degraded Visual Environment (DVE).

  9. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and their impact, within a two-crew flight deck, on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  10. Low-Power Smart Imagers for Vision-Enabled Sensor Networks

    CERN Document Server

    Fernández-Berni, Jorge; Rodríguez-Vázquez, Ángel

    2012-01-01

    This book presents a comprehensive, systematic approach to the development of vision system architectures that employ sensory-processing concurrency and parallel processing to meet the autonomy challenges posed by a variety of safety and surveillance applications.  Coverage includes a thorough analysis of resistive diffusion networks embedded within an image sensor array. This analysis supports a systematic approach to the design of spatial image filters and their implementation as vision chips in CMOS technology. The book also addresses system-level considerations pertaining to the embedding of these vision chips into vision-enabled wireless sensor networks.  Describes a system-level approach for designing vision devices and embedding them into vision-enabled, wireless sensor networks; Surveys state-of-the-art, vision-enabled WSN nodes; Includes details of specifications and challenges of vision-enabled WSNs; Explains architectures for low-energy CMOS vision chips with embedded, programmable spatial f...

  11. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching required in stereo vision and without the phase unwrapping required in grating projection profilometry. First, we study the new vision sensor theoretically, and build a geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of obstacles in the robot's visual field is studied, so that obstacles in the field can be located accurately. Simulation experiments and analysis show that this research is useful for addressing the problem of autonomous navigation of mobile robots in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space-exploring robots in dark and GPS-denied environments.

  12. Computer vision as an alternative for collision detection

    OpenAIRE

    Drangsholt, Marius Aarvik

    2015-01-01

    The goal of this thesis was to implement a computer vision system on a low-power platform, to see if that could be an alternative to a collision detection system. To achieve this, research into the fundamentals of computer vision was performed, and both hardware and software implementations were carried out. To create the computer vision system, a stereo rig was constructed using low-cost Logitech webcameras and connected to a Raspberry Pi 2 development board. The computer vision library Op...

  13. A Ship Cargo Hold Inspection Approach Using Laser Vision Systems

    OpenAIRE

    SHEN Yang; ZHAO Ning; LIU Haiwei; MI Chao

    2013-01-01

    Our paper presents a vision system based on the laser measurement system (LMS) for bulk ship inspection. The LMS scanner with a 2-axis servo system is installed on the ship loader to build the shape of the ship. Then, a group of real-time image processing algorithms is implemented to compute the shape of the cargo hold, the inclination angle of the ship and the relative position between the ship loader and the cargo hold. Based on those computed inspection data of the ship, the ship loader c...

  14. Rod phototransduction determines the trade-off of temporal integration and speed of vision in dark-adapted toads.

    Science.gov (United States)

    Haldin, Charlotte; Nymark, Soile; Aho, Ann-Christine; Koskelainen, Ari; Donner, Kristian

    2009-05-06

    Human vision is approximately 10 times less sensitive than toad vision on a cool night. Here, we investigate (1) how far differences in the capacity for temporal integration underlie such differences in sensitivity and (2) whether the response kinetics of the rod photoreceptors can explain temporal integration at the behavioral level. The toad was studied as a model that allows experimentation at different body temperatures. Sensitivity, integration time, and temporal accuracy of vision were measured psychophysically by recording snapping at worm dummies moving at different velocities. Rod photoresponses were studied by ERG recording across the isolated retina. In both types of experiments, the general timescale of vision was varied by using two temperatures, 15 and 25 degrees C. Behavioral integration times were 4.3 s at 15 degrees C and 0.9 s at 25 degrees C, and rod integration times were 4.2-4.3 s at 15 degrees C and 1.0-1.3 s at 25 degrees C. Maximal behavioral sensitivity was fivefold lower at 25 degrees C than at 15 degrees C, which can be accounted for by the inability of the "warm" toads to integrate light over longer times than the rods. However, the long integration time at 15 degrees C, allowing high sensitivity, degraded the accuracy of snapping toward quickly moving worms. We conclude that temporal integration explains a considerable part of all variation in absolute visual sensitivity. The strong correlation between rods and behavior suggests that the integration time of dark-adapted vision is set by rod phototransduction at the input to the visual system. This implies that there is an inexorable trade-off between temporal integration and resolution.
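
    A quick plausibility check (ours, not the authors'): if absolute sensitivity scales with the time over which the visual system can sum photons, the measured integration times predict a sensitivity ratio of about

        t(15 °C) / t(25 °C) ≈ 4.3 s / 0.9 s ≈ 4.8,

    consistent with the roughly fivefold difference in maximal behavioral sensitivity reported above.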

  15. Visions, Scenarios and Action Plans Towards Next Generation Tanzania Power System

    Directory of Open Access Journals (Sweden)

    Alex Kyaruzi

    2012-10-01

    This paper presents strategic visions, scenarios and action plans for enhancing Tanzania Power Systems towards next generation Smart Power Grid. It first introduces the present Tanzanian power grid and the challenges ahead in terms of generation capacity, financial aspect, technical and non-technical losses, revenue loss, high tariff, aging infrastructure, environmental impact and the interconnection with the neighboring countries. Then, the current initiatives undertaken by the Tanzania government in response to the present challenges and the expected roles of smart grid in overcoming these challenges in the future with respect to the scenarios presented are discussed. The developed scenarios along with visions and recommended action plans towards the future Tanzanian power system can be exploited at all governmental levels to achieve public policy goals and help develop business opportunities by motivating domestic and international investments in modernizing the nation’s electric power infrastructure. In return, it should help build the green energy economy.

  16. Experimental shift work studies of permanent night, and rapidly rotating, shift systems. Pt. 1. Behaviour of various characteristics of sleep

    Energy Technology Data Exchange (ETDEWEB)

    Knauth, P.; Rutenfranz, J.; Romberg, H.P.; Decoster, F.; Kiesswetter, E. (Dortmund Univ. (Germany, F.R.). Inst. fuer Arbeitsphysiologie); Schulz, H. (Max-Planck-Institut fuer Psychiatrie, Muenchen (Germany, F.R.). Klinisches Inst.)

    1980-06-01

    In connection with experimental shift work 20 volunteers were examined while working on different rapidly or slowly rotating shift systems. Sleep was analyzed over a total of 112 days. Sleep was disturbed by children's noise or traffic noise. Sleep duration and sleep quality were particularly badly affected by noise with a high information value (children's noise). The ultradian rhythmicity of sleep did not appear to be disrupted by the change from day to night work. There were no significant differences between morning sleep and afternoon sleep after night work. In the laboratory experiments with fixed sleep durations, no separate effects on sleep quality could be established for different shift systems.

  17. Night Eating Disorders

    OpenAIRE

    Deniz Tuncel; Fatma Özlem Orhan

    2009-01-01

    Hunger is an awakening-related biological impulse. The relationship between hunger and sleep is moderated by the control of the homeostatic and circadian rhythms of the body. Abnormal eating behavior during the sleep period can result from different causes. Abnormal eating during the main sleep period has been categorized as either night eating syndrome or sleep related eating disorder. Night eating syndrome (NES) is an eating disorder characterised by the clinical features of morning anorexia, even...

  18. Novice Nurses’ Perception of Working Night Shifts: A Qualitative Study

    Directory of Open Access Journals (Sweden)

    Mohsen Faseleh Jahromi

    2013-08-01

    Introduction: Nursing is always accompanied by shift working and nurses in Iran have to work night shifts in some stages of their professional life. Therefore, the present study aimed to describe the novice nurses’ perception of working night shifts. Methods: The present qualitative study was conducted on 20 novice nurses working in two university hospitals of Jahrom, Iran. The study data were collected through focus group interviews. All the interviews were recorded, transcribed, and analyzed using constant comparative analysis and qualitative content analysis. Results: The study findings revealed five major themes of value system, physical and psychological problems, social relationships, organizational problems, and appropriate opportunity. Conclusion: The study presented a deep understanding of the novice nurses’ perception of working night shifts, which can be used by the managers as a basis for organizing health and treatment systems.

  19. Night shift work and modifiable lifestyle factors.

    Science.gov (United States)

    Pepłońska, Beata; Burdelak, Weronika; Krysicka, Jolanta; Bukowska, Agnieszka; Marcinkiewicz, Andrzej; Sobala, Wojciech; Klimecka-Muszyńska, Dorota; Rybacki, Marcin

    2014-10-01

    Night shift work has been linked to some chronic diseases. Modification of lifestyle by night work may partially contribute to the development of these diseases, nevertheless, so far epidemiological evidence is limited. The aim of the study was to explore association between night shift work and lifestyle factors using data from a cross-sectional study among blue-collar workers employed in industrial plants in Łódź, Poland. The anonymous questionnaire was self-administered among 605 employees (236 women and 369 men, aged 35 or more) - 434 individuals currently working night shifts. Distribution of the selected lifestyle related factors such as smoking, alcohol drinking, physical activity, body mass index (BMI), number of main meals and the hour of the last meal was compared between current, former, and never night shift workers. Adjusted ORs or predicted means were calculated, as a measure of the associations between night shift work and lifestyle factors, with age, marital status and education included in the models as covariates. Recreational inactivity (defined here as less than one hour per week of recreational physical activity) was associated with current night shift work when compared to never night shift workers (OR = 2.43, 95% CI: 1.13-5.22) among men. Alcohol abstinence and later time of the last meal was associated with night shift work among women. Statistically significant positive relationship between night shift work duration and BMI was observed among men (p = 0.029). This study confirms previous studies reporting lower exercising among night shift workers and tendency to increase body weight. This finding provides important public health implication for the prevention of chronic diseases among night shift workers. Initiatives promoting physical activity addressed in particular to the night shift workers are recommended.

  20. Night shift work and modifiable lifestyle factors

    Directory of Open Access Journals (Sweden)

    Beata Pepłońska

    2014-10-01

    Objectives: Night shift work has been linked to some chronic diseases. Modification of lifestyle by night work may partially contribute to the development of these diseases, nevertheless, so far epidemiological evidence is limited. The aim of the study was to explore association between night shift work and lifestyle factors using data from a cross-sectional study among blue-collar workers employed in industrial plants in Łódź, Poland. Material and Methods: The anonymous questionnaire was self-administered among 605 employees (236 women and 369 men, aged 35 or more) - 434 individuals currently working night shifts. Distribution of the selected lifestyle related factors such as smoking, alcohol drinking, physical activity, body mass index (BMI), number of main meals and the hour of the last meal was compared between current, former, and never night shift workers. Adjusted ORs or predicted means were calculated, as a measure of the associations between night shift work and lifestyle factors, with age, marital status and education included in the models as covariates. Results: Recreational inactivity (defined here as less than one hour per week of recreational physical activity) was associated with current night shift work when compared to never night shift workers (OR = 2.43, 95% CI: 1.13-5.22) among men. Alcohol abstinence and later time of the last meal was associated with night shift work among women. Statistically significant positive relationship between night shift work duration and BMI was observed among men (p = 0.029). Conclusions: This study confirms previous studies reporting lower exercising among night shift workers and tendency to increase body weight. This finding provides important public health implication for the prevention of chronic diseases among night shift workers. Initiatives promoting physical activity addressed in particular to the night shift workers are recommended.

  1. International Border Management Systems (IBMS) Program : visions and strategies.

    Energy Technology Data Exchange (ETDEWEB)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  2. Vision Assessment and Prescription of Low Vision Devices

    OpenAIRE

    Keeffe, Jill

    2004-01-01

    Assessment of vision and prescription of low vision devices are part of a comprehensive low vision service. Other components of the service include training the person affected by low vision in use of vision and other senses, mobility, activities of daily living, and support for education, employment or leisure activities. Specialist vision rehabilitation agencies have services to provide access to information (libraries) and activity centres for groups of people with impaired vision.

  3. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
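
    A toy version of that pipeline (PCA-style dimensionality reduction feeding a neural-network classifier) fits in a few lines of scikit-learn; dummy data replaces the cork-tile features, and ordinary gradient-based training stands in for the GLPτS metaheuristic used in the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# X: feature vectors extracted from gray images; y: tile classes.
rng = np.random.default_rng(0)
X, y = rng.random((200, 64)), rng.integers(0, 4, 200)

clf = make_pipeline(PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```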

  4. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  5. Annotated Bibliography of the Army Research Institute’s Training Research Supporting the Land Warrior and Ground Soldier Systems: 1998-2009

    Science.gov (United States)

    2009-07-01

    ... aiming lights (AN/PAQ-4C and AN/PEQ-2A), which were used in conjunction with night vision goggles (NVGs, AN/PVS-7B) and the thermal weapon sight (TWS, AN/PAS-13) ...

  6. Demo : an embedded vision system for high frame rate visual servoing

    NARCIS (Netherlands)

    Ye, Z.; He, Y.; Pieters, R.S.; Mesman, B.; Corporaal, H.; Jonker, P.P.

    2011-01-01

    The frame rate of commercial off-the-shelf industrial cameras is breaking the threshold of 1000 frames-per-second, the sample rate required in high performance motion control systems. On the one hand, it enables computer vision as a cost-effective feedback source; on the other hand, it imposes

  7. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision ... and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the ... "Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  8. Night and day in the VA: associations between night shift staffing, nurse workforce characteristics, and length of stay.

    Science.gov (United States)

    de Cordova, Pamela B; Phibbs, Ciaran S; Schmitt, Susan K; Stone, Patricia W

    2014-04-01

    In hospitals, nurses provide patient care around the clock, but the impact of night staff characteristics on patient outcomes is not well understood. The aim of this study was to examine the association between night nurse staffing and workforce characteristics and the length of stay (LOS) in 138 veterans affairs (VA) hospitals using panel data from 2002 through 2006. Staffing in hours per patient day was higher during the day than at night. The day nurse workforce had more educational preparation than the night workforce. Nurses' years of experience at the unit, facility, and VA level were greater at night. In multivariable analyses controlling for confounding variables, higher night staffing and a higher skill mix were associated with reduced LOS. © 2014 Wiley Periodicals, Inc.

  9. Vision Problems in Homeless Children.

    Science.gov (United States)

    Smith, Natalie L; Smith, Thomas J; DeSantis, Diana; Suhocki, Marissa; Fenske, Danielle

    2015-08-01

    Vision problems in homeless children can decrease educational achievement and quality of life. To estimate the prevalence and specific diagnoses of vision problems in children in an urban homeless shelter, a prospective series of 107 homeless children and teenagers underwent screening with a vision questionnaire, eye chart screening (if mature enough) and, if a vision problem was suspected, evaluation by a pediatric ophthalmologist. Glasses and other therapeutic interventions were provided if necessary. The prevalence of vision problems in this population was 25%. Common diagnoses included astigmatism, amblyopia, anisometropia, myopia, and hyperopia. Glasses were required and provided for 24 children (22%). Vision problems in homeless children are common and frequently correctable with ophthalmic intervention. Evaluation by a pediatric ophthalmologist is crucial for accurate diagnosis and treatment. Our system of screening and evaluation is feasible, efficacious, and reproducible in other homeless care situations.

  10. A 360 degrees evaluation of a night-float system for general surgery: a response to mandated work-hours reduction.

    Science.gov (United States)

    Goldstein, Michael J; Kim, Eugene; Widmann, Warren D; Hardy, Mark A

    2004-01-01

    New York State Code 405 and societal/political pressure have led the RRC and ACGME to mandate strict limitations on resident work hours. In an attempt to meet these limitations, we have switched from the previous Q3 call schedule to a specialized night float (NF) system, the continuity-care system (CCS). The purpose of this CCS is to maximize resident duty time spent on direct patient care, operative experience, and outpatient clinics, while reducing duty hours spent on performing routine tasks and call coverage. The implementation of the CCS is the fundamental step in the restructuring of our residency program. In addition to a change in the call system, we added physician assistants to aid in performing some service tasks. We performed a 360 degrees evaluation of this work in progress. In May 2002, the standard Q3 call system was abolished on the general surgery services at the New York Presbyterian Hospital, Columbia campus. Two dedicated teams were created to provide day and night coverage, a day continuity-care team (DCT) and a night continuity-care team (NCT). The DCTs, consisting of PGY1-5 residents, provide daily in-house coverage from 6 AM to 5 PM with no regular weekday night-call responsibilities. The DCT residents provide Friday night, Saturday, and daytime Sunday call coverage 3 to 4 days per month. The NCT, consisting of 5 PGY1-5 residents, provides nightly continuous care, 5 PM to 6 AM, Sunday through Thursday, with no other weekend call responsibilities. This system creates a schedule with less than 80 duty hours per week, on average, with one 24-hour period off a week, one complete weekend off per month, and no more than 24 hours of consecutive duty time. After 1 year of use, the system was evaluated by a 360 degrees method in which residents, residents' spouses, nurses, and faculty were surveyed using a Likert-type scale. Statistical significance was calculated using the Student t-test. Patient satisfaction was measured both by internal review of

  11. FY 2001 report on the new energy vision of Ajigasawa Town; 2001 nendo Azigasawa machi shin energy vision hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-02-01

    For the purpose of promoting the introduction of new energy and enhancing the awareness of it in Ajigasawa Town, Aomori Prefecture, an investigational study was conducted of the amount of energy demand of the town, potential introduction of new energy, etc., and a new energy vision was worked out. The population of Ajigasawa Town was 13,551 according to the results of the national census taken in 2000, which is slightly decreasing. The energy demand is broken down into 40.1% in the transportation sector, 35.4% in the industrial sector and 24.5% in the commercial/residential sector, depending on petroleum (81.8%) and electric power (13.1%). The CO2 emission amount from the above is estimated at 26,210 t-C/y in total. In the model project for new energy introduction, the following were selected: wind power generation for the filtration plant of water supply system/night soil treatment plant/funeral hall/comprehensive park/seed and seedling center; photovoltaic power generation for the trip village for youth/elementary schools; fuel cell/hybrid car for Ajigasawa town office; micro-hydroelectric power generation for nursery; natural gas cogeneration for the insurance welfare center. (NEDO)

  12. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and maximum 64 disparity pixels. PMID:23459385
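
    For readers unfamiliar with the algorithm, a minimal (unoptimized) software reference of SAD block matching is sketched below; it mirrors the 5 × 5 window and 64-disparity search described above, but makes no attempt to reproduce the paper's pipelined FPGA implementation:

```python
import numpy as np

def sad_disparity(left, right, window=5, max_disp=64):
    # Dense disparity by brute-force SAD block matching.
    # left/right: rectified grayscale images as float arrays.
    half = window // 2
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_sad = 0, np.inf
            for d in range(min(max_disp, x - half + 1)):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp
```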

  13. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    Science.gov (United States)

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensibility of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  14. Development of machine vision system for PHWR fuel pellet inspection

    Energy Technology Data Exchange (ETDEWEB)

    Kamalesh Kumar, B.; Reddy, K.S.; Lakshminarayana, A.; Sastry, V.S.; Ramana Rao, A.V. [Nuclear Fuel Complex, Hyderabad, Andhra Pradesh (India); Joshi, M.; Deshpande, P.; Navathe, C.P.; Jayaraj, R.N. [Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh (India)

    2008-07-01

    Nuclear Fuel Complex, a constituent of the Department of Atomic Energy, India, is responsible for manufacturing nuclear fuel in India. Over a million uranium dioxide pellets fabricated per annum need visual inspection. In order to overcome the limitations of human-based visual inspection, NFC has undertaken the development of a machine vision system. The development involved designing various subsystems, viz. a mechanical and control subsystem for handling and rotation of fuel pellets, a lighting subsystem for illumination, an image acquisition system, and an image processing system, and their integration. This paper brings out details of the various subsystems and results obtained from the trials conducted. (author)

  15. Oxidative DNA damage during night shift work.

    Science.gov (United States)

    Bhatti, Parveen; Mirick, Dana K; Randolph, Timothy W; Gong, Jicheng; Buchanan, Diana Taibi; Zhang, Junfeng Jim; Davis, Scott

    2017-09-01

    We previously reported that compared with night sleep, day sleep among shift workers was associated with reduced urinary excretion of 8-hydroxydeoxyguanosine (8-OH-dG), potentially reflecting a reduced ability to repair 8-OH-dG lesions in DNA. We identified the absence of melatonin during day sleep as the likely causative factor. We now investigate whether night work is also associated with reduced urinary excretion of 8-OH-dG. For this cross-sectional study, 50 shift workers with the largest negative differences in night work versus night sleep circulating melatonin levels (measured as 6-sulfatoxymelatonin in urine) were selected from among the 223 shift workers included in our previous study. 8-OH-dG concentrations were measured in stored urine samples using high performance liquid chromatography with electrochemical detection. Mixed effects models were used to compare night work versus night sleep 8-OH-dG levels. Circulating melatonin levels during night work (mean = 17.1 ng/mg creatinine) were much lower than during night sleep (mean = 51.7 ng/mg creatinine). In adjusted analyses, average urinary 8-OH-dG levels during the night work period were only 20% of those observed during the night sleep period (95% CI 10% to 30%). These results suggest that night work, relative to night sleep, is associated with reduced repair of 8-OH-dG lesions in DNA and that the effect is likely driven by melatonin suppression occurring during night work relative to night sleep. If confirmed, future studies should evaluate melatonin supplementation as a means to restore oxidative DNA damage repair capacity among shift workers. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
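
    The core idea (order statistics over a spatiotemporal neighborhood) can be prototyped in a few lines of Python; this sketch uses a plain 3 × 3 × 3 median over a frame stack and omits the adaptivity and bit-serial arithmetic that make the FPLD design efficient:

```python
import numpy as np
from scipy.ndimage import median_filter

# video: (frames, height, width); random data stands in for real frames.
video = np.random.rand(30, 240, 320).astype(np.float32)

# Median over a 3x3x3 space-time window exploits both the spatial and
# temporal correlation of the sequence, as the scheme above does.
filtered = median_filter(video, size=(3, 3, 3))
```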

  17. Cold-induced bradycardia in man during sleep in Arctic winter nights

    Science.gov (United States)

    Buguet, A. G. C.

    1987-03-01

    Two young male Caucasians volunteered for a study on the effects of cold exposure during night sleep in winter in the Arctic. The 14-day experiment was divided into three consecutive periods: baseline (2 nights), cold exposure (10 nights) and recovery (2 nights). Both baseline and recovery data were obtained in neutral thermal conditions in a laboratory. The subjects slept in a sleeping bag under an unheated tent during the cold exposure. Apart from polysomnographic and body temperature recordings, electrocardiograms were taken through a telemetric system for safety purposes. Heart rates were noted at 5-min intervals and averaged hourly. In both environmental conditions, heart rate decreased within the first two hours of sleep. Comparison of the data obtained during cold exposure vs. thermal neutrality revealed lower values of heart rate in the cold, while body temperatures remained within normal range. This cold-induced bradycardia supervening during night sleep is discussed in terms of the occurrence of a vagal reflex preventing central blood pressure from rising.

  18. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, the book surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems. · Provides i...

  19. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color.

    Science.gov (United States)

    Trinderup, Camilla H; Dahl, Anders; Jensen, Kirsten; Carstensen, Jens Michael; Conradsen, Knut

    2015-04-01

    The color assessment ability of a multispectral vision system is investigated by a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material; it is heterogeneous with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis was conducted and showed that on a calibration sheet the two instruments are equally capable of measuring color. Moreover, for fresh meat samples with a glossier surface, the vision system provides a more color-rich assessment than the colorimeter. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods, accounting for other sources of variation, leading to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Science.gov (United States)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-04-28

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval for spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
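
    As an illustration of the extended-model idea (fitting the phase-to-coordinate mapping directly instead of compensating each error source), the toy sketch below fits a polynomial from unwrapped phase to depth by least squares; the paper's actual model maps phase to full 3D coordinates, and the polynomial order and calibration data here are assumptions:

```python
import numpy as np

def fit_phase_to_depth(phi_samples, z_samples, order=3):
    # phi_samples, z_samples: 1-D arrays gathered from calibration
    # targets at known depths. Columns of A are [phi^3, phi^2, phi, 1].
    A = np.vander(phi_samples, order + 1)
    coeffs, *_ = np.linalg.lstsq(A, z_samples, rcond=None)
    return coeffs

def depth_from_phase(phi, coeffs):
    # Evaluate the fitted polynomial (highest power first).
    return np.polyval(coeffs, phi)
```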

  1. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    Science.gov (United States)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web based virtual environment for facilitating repetitive movement training, with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.

  2. Using Scenario Visioning and Participatory System Dynamics Modeling to Investigate the Future: Lessons from Minnesota 2050

    Directory of Open Access Journals (Sweden)

    Kathryn J. Draeger

    2010-08-01

    Both scenario visioning and participatory system dynamics modeling emphasize the dynamic and uncontrollable nature of complex socio-ecological systems, and the significance of multiple feedback mechanisms. These two methodologies complement one another, but are rarely used together. We partnered with regional organizations in Minnesota to design a future visioning process that incorporated both scenarios and participatory system dynamics modeling. The three purposes of this exercise were: first, to assist regional leaders in making strategic decisions that would make their communities sustainable; second, to identify research gaps that could impede the ability of regional and state groups to plan for the future; and finally, to introduce more systems thinking into planning and policy-making around environmental issues. We found that scenarios and modeling complemented one another, and that both techniques allowed regional groups to focus on the sustainability of fundamental support systems (energy, food, and water supply). The process introduced some creative tensions between imaginative scenario visioning and quantitative system dynamics modeling, and between creating desired futures (a strong cultural norm) and inhabiting the future (a premise of the Minnesota 2050 exercise). We suggest that these tensions can stimulate more agile, strategic thinking about the future.

  3. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  4. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  5. Night eating among veterans with obesity.

    Science.gov (United States)

    Dorflinger, Lindsey M; Ruser, Christopher B; Masheb, Robin M

    2017-10-01

    The obesity rate is higher among veterans than the general population, yet few studies have examined their eating behaviors, and none have examined the presence of night eating and related comorbidities. This study examines night eating syndrome (NES) among veterans seeking weight management treatment, and relationships between NES and weight, insomnia, disordered eating, and psychological variables. The sample consisted of 110 veterans referred to a weight management program at VA Connecticut Healthcare System. More than one out of ten veterans screened positive for NES, and one-third screened positive for insomnia. Most individuals screening positive for NES also screened positive for insomnia. Night eating was associated with higher BMI, and with higher scores on measures of binge eating, emotional overeating, and eating disorder symptomatology. Veterans screening positive for NES were also significantly more likely to screen positive for depression and PTSD. When controlling for insomnia, only the relationships between night eating and binge and emotional eating remained significant. Those screening positive for PTSD were more likely to endorse needing to eat to return to sleep. Findings suggest that both NES and insomnia are common among veterans seeking weight management services, and that NES is a marker for additional disordered eating behavior, specifically binge eating and overeating in response to emotions. Additional studies are needed to further delineate the relationships among NES, insomnia, and psychological variables, as well as to examine whether specifically addressing NES within behavioral weight management interventions can improve weight outcomes and problematic eating behaviors. Published by Elsevier Ltd.

  6. Disruption of Circadian Rhythms by Light During Day and Night.

    Science.gov (United States)

    Figueiro, Mariana G

    2017-06-01

    This study aims to discuss possible reasons why research to date has not forged direct links between light at night, acute melatonin suppression or circadian disruption, and risks for disease. Data suggest that irregular light-dark patterns or light exposures at the wrong circadian time can lead to circadian disruption and disease risks. However, there remains an urgent need to: (1) specify light stimulus in terms of circadian rather than visual response; (2) when translating research from animals to humans, consider species-specific spectral and absolute sensitivities to light; (3) relate the characteristics of photometric measurement of light at night to the operational characteristics of the circadian system; and (4) examine how humans may be experiencing too little daytime light, not just too much light at night. To understand the health effects of light-induced circadian disruption, we need to measure and control light stimulus during the day and at night.

  7. A low-cost machine vision system for the recognition and sorting of small parts

    Science.gov (United States)

    Barea, Gustavo; Surgenor, Brian W.; Chauhan, Vedang; Joshi, Keyur D.

    2018-04-01

    An automated machine vision-based system for the recognition and sorting of small parts was designed, assembled and tested. The system was developed to address a need to expose engineering students to the issues of machine vision and assembly automation technology, with readily available and relatively low-cost hardware and software. This paper outlines the design of the system and presents experimental performance results. Three different styles of plastic gears, together with three different styles of defective gears, were used to test the system. A pattern matching tool was used for part classification. Nine experiments were conducted to demonstrate the effects of changing various hardware and software parameters, including conveyor speed, gear feed rate, and classification and identification score thresholds. It was found that the system could achieve a maximum system accuracy of 95% at a feed rate of 60 parts/min, for a given set of parameter settings. Future work will examine the effect of lighting.
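
    A minimal version of such a pattern-matching classifier can be put together with OpenCV's normalized template matching; the template files, labels and the 0.80 identification threshold below are illustrative stand-ins for the paper's tool and tuned score thresholds:

```python
import cv2

TEMPLATES = {"gear_A": "tmpl_a.png",   # hypothetical template images,
             "gear_B": "tmpl_b.png",   # one per gear style
             "gear_C": "tmpl_c.png"}
ID_THRESHOLD = 0.80                    # identification score threshold

def classify(gray_part):
    # Score the part image against every template; accept the best
    # match only if it clears the threshold, otherwise reject the
    # part as defective/unknown.
    scores = {}
    for label, path in TEMPLATES.items():
        tmpl = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        res = cv2.matchTemplate(gray_part, tmpl, cv2.TM_CCOEFF_NORMED)
        scores[label] = float(res.max())
    label = max(scores, key=scores.get)
    return label if scores[label] >= ID_THRESHOLD else "reject"
```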

  8. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for features coordinates determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system-prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
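
    The resection-corrects-inertial loop can be reduced to a textbook Kalman filter. The 1-D constant-velocity toy below treats the photogrammetric position fix as the KF measurement, in the spirit of the architecture described above; all noise parameters are invented, and the real system estimates full pose, not a single coordinate:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # vision (resection) measures position
Q = np.diag([1e-4, 1e-3])               # inertial process noise (assumed)
R = np.array([[1e-2]])                  # resection measurement noise (assumed)

x = np.zeros((2, 1))                    # state: position, velocity
P = np.eye(2)                           # state covariance

def kf_step(z_vision):
    # One predict/correct cycle: inertial prediction, vision correction.
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.atleast_2d(z_vision) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x.ravel()
```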

  9. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  10. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy was accomplished by merging fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Evaluation of the control system accuracy was performed using robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system resulted in high fracture reduction reliability with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, contributing a potential improvement of their quality.
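
    In outline, the two-phase strategy amounts to a fast open-loop move followed by visual servoing on tracker feedback. The sketch below is schematic, with hypothetical robot/tracker interfaces (move_to, move_by, read_pose) and an invented proportional gain; it is not the authors' controller:

```python
import numpy as np

def reduce_fragment(robot, tracker, target, tol=0.05, kp=0.5):
    # Phase 1: fast open-loop positioning toward the planned pose.
    robot.move_to(target)
    # Phase 2: close the loop with optical-tracker feedback until the
    # residual error is within tolerance (0.05 mm here, illustrative).
    while True:
        pose = tracker.read_pose()
        err = np.asarray(target, float) - np.asarray(pose, float)
        if np.linalg.norm(err) < tol:
            return pose
        robot.move_by(kp * err)   # proportional correction step
```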

  11. Artificial Vision, New Visual Modalities and Neuroadaptation

    Directory of Open Access Journals (Sweden)

    Hilmi Or

    2012-01-01

    To study the descriptions from which artificial vision derives, to explore the new visual modalities resulting from eye surgeries and diseases, and to gain awareness of the use of machine vision systems for both the enhancement of visual perception and a better understanding of neuroadaptation. Science has not yet been able to define exactly what vision is. However, some optics-based systems and definitions have been established that capture some of the factors involved in the formation of seeing. The best-known system involves the Gabor filter and Gabor patch, which model edge perception and describe visual perception in the best-known way. These systems are used today in industry and in the technology of machines, robots and computers to provide their "seeing". Beyond machinery, these definitions are used in humans for neuroadaptation to the new visual modalities that arise after some eye surgeries, or to improve the quality of some already known visual modalities. Besides this, "blindsight" (which was not known to exist until 35 years ago) can be stimulated with visual exercises. The Gabor system is a description of visual perception definable in machine vision as well as in human visual perception, and it is used today in robotic vision. There are new visual modalities which arise after some eye surgeries or with the use of some visual optical devices. Also, blindsight is a different visual modality that is beginning to be defined, even though its exact etiology is not known. In all the new visual modalities, new vision-stimulating therapies using Gabor systems can be applied. (Turk J Ophthalmol 2012; 42: 61-5)
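
    For the curious, a Gabor filter bank of the kind referred to above is a one-liner in OpenCV; the kernel size, sigma, wavelength and aspect ratio below are illustrative defaults, and the max-over-orientations "energy" is just one simple way to use the responses for edge/texture perception:

```python
import cv2
import numpy as np

# Bank of Gabor kernels at four orientations (0, 45, 90, 135 degrees).
kernels = [cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=t,
                              lambd=10.0, gamma=0.5, psi=0.0)
           for t in np.arange(0.0, np.pi, np.pi / 4)]

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Edge/texture energy: strongest response over all orientations.
responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in kernels]
energy = np.max(np.abs(np.stack(responses)), axis=0)
```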

  12. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    International Nuclear Information System (INIS)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik

    2016-01-01

    Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure are being proposed and many studies are being conducted on them. The vision-based measurement method is a noncontact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, and shape of objects, the possibility of full-field measurement, and the possibility of measuring the distribution of stress or defects of structures based on the measurement results of displacement and strain in a map. The strains were measured with various image-based methods in a coupon test and the measurements were compared. In the future, the validity of the algorithm will be verified against a strain gauge and a clip gauge, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.

  13. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure are being proposed and many studies are being conducted on them. The vision-based measurement method is a noncontact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, and shape of objects, the possibility of full-field measurement, and the possibility of measuring the distribution of stress or defects of structures based on the measurement results of displacement and strain in a map. The strains were measured with various image-based methods in a coupon test and the measurements were compared. In the future, the validity of the algorithm will be verified against a strain gauge and a clip gauge, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.

  14. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the best- ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age and Race/Ethnicity; Table for 2010 ...

  15. Assessment of capabilities of lidar systems in day-and night-time under different atmospheric and internal-noise conditions

    Science.gov (United States)

    Agishev, Ravil; Comerón, Adolfo

    2018-04-01

    As an application of the dimensionless parameterization concept proposed earlier for the characterization of lidar systems, the universal assessment of lidar capabilities in day and night conditions is considered. The dimensionless parameters encapsulate the atmospheric conditions, the lidar optical and optoelectronic characteristics, including the photodetector internal noise, and the sky background radiation. Approaches to ensure immunity of the lidar system to external background radiation are discussed.

  16. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Directory of Open Access Journals (Sweden)

    Došen Strahinja

    2010-08-01

    Background: Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: (1) the user triggers the system and controls the orientation of the hand; (2) a high-level controller automatically selects the grasp type and size; and (3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and
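
    The rule-based reasoning layer can be pictured as a small decision table from vision-derived object properties to grasp commands; the rules and thresholds below are invented for illustration and are not the CVS rule set:

```python
def select_grasp(width_mm, height_mm, elongation):
    # Map vision-derived object properties to (grasp type, size).
    # Thresholds are illustrative placeholders.
    if width_mm < 30 and height_mm < 30:
        return ("pinch", "small")
    if elongation > 2.0:                  # long, thin object
        return ("lateral", "medium" if width_mm < 60 else "large")
    return ("palmar", "small" if width_mm < 50 else "large")
```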

  17. Associations between number of consecutive night shifts and impairment of neurobehavioral performance during a subsequent simulated night shift.

    Science.gov (United States)

    Magee, Michelle; Sletten, Tracey L; Ferguson, Sally A; Grunstein, Ronald R; Anderson, Clare; Kennaway, David J; Lockley, Steven W; Rajaratnam, Shantha Mw

    2016-05-01

    This study aimed to investigate sleep and circadian phase in the relationships between neurobehavioral performance and the number of consecutive shifts worked. Thirty-four shift workers [20 men, mean age 31.8 (SD 10.9) years] worked 2-7 consecutive night shifts immediately prior to a laboratory-based, simulated night shift. For 7 days prior, participants worked their usual shift sequence, and sleep was assessed with logs and actigraphy. Participants completed a 10-minute auditory psychomotor vigilance task (PVT) at the start (~21:00 hours) and end (~07:00 hours) of the simulated night shift. Mean reaction times (RT), number of lapses and RT distribution were compared between those who worked 2-3 consecutive night shifts versus those who worked 4-7 shifts. Following 4-7 shifts, night shift workers had significantly longer mean RT at the start and end of shift, compared to those who worked 2-3 shifts. The slowest and fastest 10% RT were significantly slower at the start, but not end, of shift among participants who worked 4-7 nights. Those working 4-7 nights also demonstrated a broader RT distribution at the start and end of shift and had significantly slower RT based on cumulative distribution analysis (5th, 25th, 50th and 75th percentiles at the start of shift; 75th percentile at the end of shift). No group differences in sleep parameters were found for 7 days and 24 hours prior to the simulated night shift. A greater number of consecutive night shifts has a negative impact on neurobehavioral performance, likely due to cognitive slowing.

  18. AHP 47: A NIGHT DATE

    Directory of Open Access Journals (Sweden)

    Phun tshogs dbang rgyal ཕུན་ཚོགས་དབང་རྒྱལ།

    2017-04-01

    Full Text Available The author was born in 1993 in Ska chung (Gaqun) Village, Nyin mtha' (Ningmute) Township, Rma lho (Henan) Mongolian Autonomous County, Rma lho (Huangnan) Tibetan Autonomous Prefecture, Mtsho sngon (Qinghai) Province, PR China. Night dating was popular for teenage boys some years ago. They rode horses and yaks when they went night dating. They generally rode yaks, because horses were important for their families and used for such important tasks as pursuing bandits and going to the county town for grain and supplies. An early experience with night dating is described.

  19. Vision-Based System for Human Detection and Tracking in Indoor Environment

    OpenAIRE

    Benezeth , Yannick; Emile , Bruno; Laurent , Hélène; Rosenberger , Christophe

    2010-01-01

    In this paper, we propose a vision-based system for human detection and tracking in an indoor environment using a static camera. The proposed method is based on object recognition in still images combined with methods using temporal information from the video. Doing that, we improve the performance of the overall system and reduce the task complexity. We first use background subtraction to limit the search space of the classifier. The segmentation is realized by modeling ...
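    The two-stage idea (cheap segmentation proposing regions, a classifier confirming them) can be sketched as follows. This is an assumed illustration using OpenCV's stock MOG2 background subtractor and default HOG people detector as stand-ins for the components described in the record; the blob-area threshold and file name are arbitrary.

```python
# Background subtraction limits the search space; a person detector then
# runs only inside the moving regions.
import cv2

backsub = cv2.createBackgroundSubtractorMOG2()
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("indoor.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 5000:  # skip blobs too small to contain a person
            continue
        roi = frame[y:y + h, x:x + w]
        rects, _ = hog.detectMultiScale(roi)
        for (rx, ry, rw, rh) in rects:
            cv2.rectangle(frame, (x + rx, y + ry),
                          (x + rx + rw, y + ry + rh), (0, 255, 0), 2)
cap.release()
```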

  20. Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Science.gov (United States)

    Liu, Xuan; Furrer, David; Kosters, Jared; Holmes, Jack

    2018-01-01

    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, called Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). It constitutes a community consensus document, as it results from the input of over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of 9 expert panels (one per KE), each consisting on average of 10 non-team members from academia, government and industry, to review and update content and prioritize gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost

  1. Principles of image processing in machine vision systems for the color analysis of minerals

    Science.gov (United States)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At the moment, color sorting is one of the promising methods of mineral raw material enrichment. The method is based on registering color differences between images of the analyzed objects. As is generally known, the difficulty of separating close color tints when sorting low-contrast minerals is one of the main disadvantages of the color sorting method. This can be related to a wrong choice of color model and incomplete image processing in the machine vision system realizing the color sorting algorithm. Another problem is the need to reconfigure the image processing parameters when the type of analyzed minerals changes. This is due to the fact that the optical properties of mineral samples vary from one mineral deposit to another. Searching for suitable image processing parameters is therefore a non-trivial task, and it does not always have an acceptable solution. In addition, there are no uniform guidelines for determining the criteria of mineral sample separation. Ideally, the reconfiguration of image processing parameters would be done by machine learning, but in practice it is carried out by manually adjusting the operating parameters until they are satisfactory for one specific enrichment task. This approach usually means that the machine vision system is unable to rapidly estimate the concentration rate of the analyzed mineral ore by the color sorting method. This paper presents the results of research aimed at addressing these shortcomings in the organization of image processing for machine vision systems used for color sorting of mineral samples. The principles of color analysis of low-contrast minerals with machine vision systems are also studied. In addition, a special processing algorithm for color images of mineral samples is developed. This algorithm automatically determines the criteria of mineral sample separation based on an analysis of representative mineral samples. Experimental studies of the proposed algorithm
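    One simple way to derive a separation criterion automatically from representative samples, as the record describes, is to summarize each mineral class by its mean color and assign new observations to the nearest class mean. The sketch below is an assumed illustration (nearest-mean in RGB); the paper's actual algorithm and color model are not specified in the record, and a perceptually uniform space such as CIELAB may separate close tints better.

```python
# Nearest-mean color classification: class statistics are learned from
# representative sample pixels, then used as the separation criterion.
import numpy as np

def class_means(samples):
    """samples: dict mapping class name -> (N, 3) array of pixel colors."""
    return {name: px.mean(axis=0) for name, px in samples.items()}

def classify(color, means):
    color = np.asarray(color, dtype=float)
    return min(means, key=lambda name: np.linalg.norm(color - means[name]))

means = class_means({
    "ore":    np.array([[90, 60, 40], [95, 66, 45]], dtype=float),
    "gangue": np.array([[180, 175, 170], [190, 185, 178]], dtype=float),
})
print(classify((100, 70, 50), means))  # -> "ore"
```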

  2. Night work and prostate cancer in men: a Swedish prospective cohort study.

    Science.gov (United States)

    Åkerstedt, Torbjörn; Narusyte, Jurgita; Svedberg, Pia; Kecklund, Göran; Alexanderson, Kristina

    2017-06-08

    Prostate cancer is the most common cancer and the second leading cause of cancer-related deaths among men, but the contributing factors are unclear. One such factor may be night work, because of the day/night alternation of work and the resulting disturbance of the circadian system. The purpose of the present study was to investigate the prospective relation between the number of years of night work and prostate cancer in men. Cohort study comparing night- and day-working twins with respect to incident prostate cancer in 12 322 men. Individuals in the Swedish Twin Registry. 12 322 male twins. Prostate cancer diagnoses were obtained from the Swedish Cancer Registry, with a follow-up time of 12 years and a total of 454 cases. Multiple Cox proportional hazard regression analysis, adjusted for a number of covariates, showed no association between ever night work and prostate cancer, nor between duration of night work and prostate cancer. Analysis of twin pairs discordant for prostate cancer (n=332) showed no significant association between night work and prostate cancer. The results, together with previous studies, suggest that night work does not seem to constitute a risk factor for prostate cancer. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Agent-Oriented Embedded Control System Design and Development of a Vision-Based Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2012-07-01

    Full Text Available This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV) based on the multi-agent system (MAS) methodology and embedded system resources. A three-phase agent-oriented design methodology, Prometheus, is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP) and an advanced RISC machine (ARM) by using the multitasking processing capacity of multiple microprocessors and the system services of a real-time operating system (RTOS). As a paradigm, an onboard embedded controller is designed and developed for the AGV with a camera detecting guiding landmarks, and the entire procedure has a high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and the onboard intelligence of the agent-oriented embedded control system.

  4. FELIN: tailored optronics and systems solutions for dismounted combat

    Science.gov (United States)

    Milcent, A. M.

    2009-05-01

    The FELIN French modernization program for dismounted combat provides the Armies with info-centric systems which dramatically enhance the performance of the soldier and the platoon. Sagem now offers a portfolio of equipment providing C4I, data and voice digital communication, and enhanced vision for day and night operations through compact, high-performance electro-optics. The FELIN system provides the infantryman with a high-tech integrated and modular system which significantly increases detection, recognition and identification capabilities, as well as situation awareness and information sharing, in any dismounted close combat situation. Among the key technologies used in this system, infrared and intensified vision provide a significant improvement in capability, observation performance and protection of the ground soldiers. This paper presents the developed equipment in detail, with an emphasis on lessons learned from the technical and operational feedback from dismounted close combat field tests.

  5. Discover POPSCIENCE on Researchers' Night

    CERN Multimedia

    The POPSCIENCE Team

    2014-01-01

    On Friday 26 September 2014, CERN will be celebrating European Researchers' Night at three venues in Geneva and St. Genis-Pouilly. Inspired by Andy Warhol, this year's theme is “Pop science is for everyone”.     Every year, on the last Friday of September, the European Researchers’ Night takes place in about 300 cities all over Europe, with funding from the EU, to promote research and highlight researchers in engaging and fun ways for the general public. Andy Warhol said, “Pop art is for everyone”. This year, “Pop science is for everyone” is the motto of the Researchers’ Night event organised by CERN and its partners*. The night will offer everyone the opportunity to learn about the latest discoveries in physics and cosmology through poetry, theatre and music. This will be in addition to the event's traditional activities for the general public. To attract new audiences,...

  6. VISION: a Versatile and Innovative SIlicOn tracking system

    CERN Document Server

    Lietti, Daniela; Vallazza, Erik

    This thesis focuses on the study of the performance of different tracking and profilometry systems (the so-called INSULAB, INSUbria LABoratory, and VISION, Versatile and Innovative SIlicON, Telescopes) used in recent years by the NTA-HCCC, COHERENT (COHERENT effects in crystals for the physics of accelerators), ICE-RAD (Interaction in Crystals for Emission of RADiation) and CHANEL (CHAnneling of NEgative Leptons) experiments, four collaborations of the INFN (Istituto Nazionale di Fisica Nucleare) dedicated to research in the field of crystal physics.

  7. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    Full Text Available In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions, and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval of spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
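    The model-extension idea (fit one overall mapping from phase to coordinates by least squares, rather than compensating each error source separately) can be shown in miniature. The sketch below fits a simple polynomial phase-to-height model to assumed calibration data; the paper's actual extended model maps phase to full 3D coordinates and its form is not given in the record.

```python
# Least-squares fit of a phase-to-height mapping z = a0 + a1*phi + a2*phi^2
# from calibration targets of known height (data values are illustrative).
import numpy as np

phi = np.array([0.5, 1.1, 1.8, 2.4, 3.0])    # measured unwrapped phase
z   = np.array([2.0, 4.9, 8.1, 10.8, 13.6])  # known target heights (mm)

A = np.vstack([np.ones_like(phi), phi, phi**2]).T
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

def phase_to_height(p):
    return coeffs[0] + coeffs[1] * p + coeffs[2] * p**2

print(phase_to_height(2.0))  # height predicted for phase 2.0
```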

  8. Vision restoration after brain and retina damage: the "residual vision activation theory".

    Science.gov (United States)

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. However, sustained improvements require repetitive

  9. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and exercise training. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object's color and then perform color segmentation. When an action judgment is wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best judgment from the vote. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
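    A weight voting mechanism of the kind described can be sketched in a few lines: each candidate judgment contributes a score-weighted vote, and the action with the highest total wins. The scores, weights, and action names below are illustrative assumptions.

```python
# Weighted voting over candidate action judgments.
from collections import defaultdict

def weighted_vote(judgments):
    """judgments: iterable of (action, score, weight) tuples."""
    totals = defaultdict(float)
    for action, score, weight in judgments:
        totals[action] += score * weight
    return max(totals, key=totals.get)

print(weighted_vote([("raise_arm", 0.7, 1.0),
                     ("wave", 0.9, 0.5),
                     ("raise_arm", 0.4, 0.8)]))  # -> "raise_arm"
```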

  10. The use of contact lens telescopic systems in low vision rehabilitation.

    Science.gov (United States)

    Vincent, Stephen J

    2017-06-01

    Refracting telescopes are afocal compound optical systems consisting of two lenses that produce an apparent magnification of the retinal image. They are routinely used in visual rehabilitation in the form of monocular or binocular hand-held low vision aids, and head- or spectacle-mounted devices, to improve distance visual acuity and, with slight modifications, to enhance acuity for near and intermediate tasks. Since the advent of ground glass haptic lenses in the 1930s, contact lenses have been employed as a useful refracting element of telescopic systems, primarily as a mobile ocular lens (the eyepiece) that moves with the eye. Telescopes which incorporate a contact lens eyepiece significantly improve the weight, cosmesis, and field of view compared to traditional spectacle-mounted telescopes, in addition to potential related psycho-social benefits. This review summarises the underlying optics and the use of contact lenses to provide telescopic magnification from the era of Descartes, to Dallos, and the present day. The limitations and clinical challenges associated with such devices are discussed, along with the potential future use of reflecting telescopes incorporated within scleral lenses and tactile contact lens systems in low vision rehabilitation. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  11. Computer Vision for Timber Harvesting

    DEFF Research Database (Denmark)

    Dahl, Anders Lindbjerg

    The goal of this thesis is to investigate computer vision methods for timber harvesting operations. The background for developing computer vision for timber harvesting is to document the origin of timber and to collect qualitative and quantitative parameters concerning the timber for efficient harvest...... segments. The purpose of image segmentation is to form the basis for more advanced computer vision methods like object recognition and classification. Our second method concerns image classification, and we present a method where we classify small timber samples by tree species based on Active Appearance...... to the development of the logTracker system the described methods have a general applicability making them useful for many other computer vision problems....

  12. Traffic Light Detection at Night

    DEFF Research Database (Denmark)

    Jensen, Morten Bornø; Philipsen, Mark Philip; Bahnsen, Chris

    2015-01-01

    Traffic light recognition (TLR) is an integral part of any intelligent vehicle; it must function both at day and at night. However, the majority of TLR research is focused on day-time scenarios. In this paper we will focus on detection of traffic lights at night and evaluate the performance...... of three detectors based on heuristic models and one learning-based detector. Evaluation is done on night-time data from the public LISA Traffic Light Dataset. The learning-based detector outperforms the model-based detectors in both precision and recall. The learning-based detector achieves an average......

  13. Disposable Multi-Sensor Unattended Ground Sensor Systems for Detecting Personnel (Systemes de detection multi-capteurs terrestres autonome destines a detecter du personnel)

    Science.gov (United States)

    2015-02-01

    the set of DCT coefficients for all the training data corresponding to the people. Then, the matrix [X_p] can be written as: [X_p] = [X_p+] - [X_p-] ...deployed on two types of ground conditions. This included ARL multi-modal sensors, video and acoustic sensors from the Universities of Memphis and...Mississippi, SASNet from Canada, video from Night Vision Laboratory and Pearls of Wisdom system from Israel operated in conjunction with ARL personnel. This

  14. A neural network based artificial vision system for licence plate recognition.

    Science.gov (United States)

    Draghici, S

    1997-02-01

    This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, controlled reliability threshold, both off-line and on-line learning, self-assessment of the output reliability and high reliability based on high-level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable for a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint based decomposition (CBD) training architecture. The system showed the following average performance on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98% and successful recognition of complete registration plates about 80%.
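    The feedforward OCR engine described can be sketched at inference time as follows. The layer sizes, character set, and random weights are placeholders; a real engine would train the weights, e.g., by error backpropagation, as the record notes.

```python
# Fully connected feedforward network with sigmoid activations, applied to
# a flattened character glyph (inference only; weights are placeholders).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(64, 256)), np.zeros(64)  # 16x16 input
W2, b2 = 0.1 * rng.normal(size=(36, 64)), np.zeros(36)   # 0-9 and A-Z

def recognize(glyph):
    """glyph: flattened 16x16 grayscale character image, values in [0, 1]."""
    hidden = sigmoid(W1 @ glyph + b1)
    output = sigmoid(W2 @ hidden + b2)
    return int(np.argmax(output))  # index of the most likely character

print(recognize(np.zeros(256)))
```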

  15. Machine Vision Tests for Spent Fuel Scrap Characteristics

    International Nuclear Information System (INIS)

    BERGER, W.W.

    2000-01-01

    The purpose of this work is to perform a feasibility test of a Machine Vision system for potential use at the Hanford K basins during spent nuclear fuel (SNF) operations. This report documents the testing performed to establish functionality of the system including quantitative assessment of results. Fauske and Associates, Inc., which has been intimately involved in development of the SNF safety basis, has teamed with Agris-Schoen Vision Systems, experts in robotics, tele-robotics, and Machine Vision, for this work

  16. Quality of life and educational benefit among orthopedic surgery residents: a prospective, multicentre comparison of the night float and the standard call systems.

    Science.gov (United States)

    Zahrai, Ali; Chahal, Jaskarndip; Stojimirovic, Dan; Schemitsch, Emil H; Yee, Albert; Kraemer, William

    2011-02-01

    Given recent evolving guidelines regarding postcall clinical relief of residents and emphasis on quality of life, novel strategies are required for implementing call schedules. The night float system has been used by some institutions as a strategy to decrease the burden of call on resident quality of life in level-1 trauma centres. The purpose of this study was to determine whether there are differences in quality of life, work-related stressors and educational experience between orthopedic surgery residents in the night float and standard call systems at 2 level-1 trauma centres. We conducted a prospective cohort study at 2 level-1 trauma hospitals comprising a standard call (1 night in 4) group and a night float (5 14-hour shifts [5 pm-7 am] from Monday to Friday) group for each hospital. Over the course of a 6-month rotation, each resident completed 3 weeks of night float. The remainder of the time on the trauma service consists of clinical duties from 6:30 am to 5:30 pm on a daily basis and intermittent coverage of weekend call only. Residents completed the Short Form-36 (SF-36) general quality-of-life questionnaire, as well as questionnaires on stress level and educational experience before the rotation (baseline) and at 2, 4 and 6 months. We performed an analysis of covariance to compare between-group differences using the baseline scores as covariates and Wilcoxon signed-rank tests (nonparametric) to determine if the residents' SF-36 scores were different from the age- and sex-matched Canadian norms. We analyzed predictors of resident quality of life using multivariable mixed models. Seven residents were in the standard call group and 9 in the night float group, for a total of 16 residents (all men, mean age 35.1 yr). Controlling for between-group differences at baseline, residents on the night float rotation had significantly lower role physical, bodily pain, social function and physical component scale scores over the 6-month observation period. Compared

  17. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline produced by various adverse effects while the train is running; it is an important basis for setting railway clearance boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision. The existing measuring systems have problems such as poor portability, complicated procedures and high cost. A new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed in this paper, and the measurement system parameters, the calibration of the camera with a wide field of view, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and experimental data analysis. The feasibility and the adaptability of the measurement system are validated. The system has advantages such as lower cost, a simpler measurement and data processing procedure, and more reliable data, and it needs no matching algorithm.
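    The heart of a monocular laser-plane setup is ray-plane triangulation: a pixel is back-projected into a viewing ray using the camera intrinsics, and the 3D point is the ray's intersection with the calibrated laser plane. The sketch below assumes an intrinsic matrix K and plane parameters (n, d) obtained from the calibrations mentioned above; all numeric values are illustrative.

```python
# Monocular laser-plane triangulation: intersect the pixel's viewing ray
# with the calibrated laser plane n . X = d (camera coordinate frame).
import numpy as np

K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])  # illustrative intrinsics
n = np.array([0.0, -0.6, 0.8])           # unit normal of the laser plane
d = 1.5                                  # plane offset (metres)

def pixel_to_3d(u, v):
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction
    t = d / (n @ ray)                               # solve n . (t*ray) = d
    return t * ray                                  # 3D point on the plane

print(pixel_to_3d(700, 600))
```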

  18. Development of a vision-based pH reading system

    Science.gov (United States)

    Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon

    2015-10-01

    pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radio-isotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may have errors due to the limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and its related software. The proposed pH reading system is built around a vision algorithm based on an RGB library, and is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera and a data acquisition (DAQ) board. To improve the sensitivity, we utilize the three primary colors of the LED (light emitting diode) in the reading device; three separate colors work better than a single white LED because of their distinct wavelengths. The other part is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program stores the color codes of the pH paper in a database; in reading mode, the CCD camera captures the pH paper and compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
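    The reading mode described (compare the captured color against a stored RGB library) reduces to a nearest-neighbor lookup. The library values below are invented placeholders, not the device's actual calibration.

```python
# Nearest-neighbor pH lookup against a stored RGB reference library.
import numpy as np

library = {4.0: (230, 160, 60),   # placeholder reference colors
           7.0: (120, 170, 80),
           10.0: (40, 90, 160)}

def read_ph(measured_rgb):
    measured = np.array(measured_rgb, dtype=float)
    return min(library,
               key=lambda ph: np.linalg.norm(measured - np.array(library[ph])))

print(read_ph((115, 168, 85)))  # -> 7.0
```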

  19. Infrared machine vision system for the automatic detection of olive fruit quality.

    Science.gov (United States)

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel value intensity to classify the whole fruit. The detection of the defect involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.

  20. Comparison of the Infiniti vision and the series 20,000 Legacy systems.

    Science.gov (United States)

    Fernández de Castro, Luis E; Solomon, Kerry D; Hu, Daniel J; Vroman, David T; Sandoval, Helga P

    2008-01-01

    To compare the efficiency of the Infiniti vision system and the Series 20,000 Legacy system phacoemulsification units during routine cataract extraction. Thirty-nine eyes of 39 patients were randomized to have their cataract removed using either the Infiniti or the Legacy system, both using the Neosonix handpiece. System settings were standardized. Ultrasound time, amount of balanced salt solution (BSS) used intraoperatively, and postoperative visual acuity at postoperative days 1, 7 and 30 were evaluated. Preoperatively, best corrected visual acuity was significantly worse in the Infiniti group compared to the Legacy group (0.38 +/- 0.23 and 0.21 +/- 0.16, respectively; p = 0.012). The mean phacoemulsification time was 39.6 +/- 22.9 s (range 6.0-102.0) for the Legacy group and 18.3 +/-19.1 s (range 1.0-80.0) for the Infiniti group (p = 0.001). The mean amounts of intraoperative BSS used were 117 +/- 37.7 ml (range 70-195) in the Legacy group and 85.3 +/- 38.9 ml (range 40-200) in the Infiniti group (p = 0.005). No differences in postoperative visual acuity were found. The ability to use higher flow rates and vacuum settings with the Infiniti vision system allowed for cataract removal with less phacoemulsification time than when using the Legacy system. Copyright 2008 S. Karger AG, Basel.

  1. New vision solar system mission study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  2. Volunteers for Researchers’ Night wanted

    CERN Multimedia

    Katarina Anthony

    2015-01-01

    Every year, on the last Friday of September, the European Researchers’ Night (see here) takes place in about 300 cities all over Europe - promoting research in engaging and fun ways for the general public. This year, CERN will be participating once again, hosting dozens of events across the Balexert shopping centre – and we’ll need YOUR help to make the celebration a success.   From film screenings and celebrity Q&A sessions to “Ask a Researcher” and build-your-own LEGO LHC events, this year’s Researchers’ Night is going to be jam-packed! The fun will kick off prior to the night itself with a mock-up of the LHC tunnel installed in the central court of the Balexert shopping centre, 8-12 September*. CERN people will be on hand to speak to shoppers about the LHC, and to encourage them to participate in Researchers’ Night! The CERN organisers are recruiting volunteers and support staff for Researchers’ ...

  3. Vision and the hypothalamus.

    Science.gov (United States)

    Trachtman, Joseph N

    2010-02-01

    For nearly 2 millennia, signs of hypothalamic-related vision disorders have been noticed as illustrated by paintings and drawings of that time of undiagnosed Horner's syndrome. It was not until the 1800s, however, that specific connections between the hypothalamus and the vision system were discovered. With a fuller elaboration of the autonomic nervous system in the early to mid 1900s, many more pathways were discovered. The more recently discovered retinohypothalamic tracts show the extent and influence of light stimulation on hypothalamic function and bodily processes. The hypothalamus maintains its myriad connections via neural pathways, such as with the pituitary and pineal glands; the chemical messengers of the peptides, cytokines, and neurotransmitters; and the nitric oxide mechanism. As a result of these connections, the hypothalamus has involvement in many degenerative diseases. A complete feedback mechanism between the eye and hypothalamus is established by the retinohypothalamic tracts and the ciliary nerves innervating the anterior pole of the eye and the retina. A discussion of hypothalamic-related vision disorders includes neurologic syndromes, the lacrimal system, the retina, and ocular inflammation. Tables and figures have been used to aid in the explanation of the many connections and chemicals controlled by the hypothalamus. The understanding of the functions of the hypothalamus will allow the clinician to gain better insight into the many pathologies associated between the vision system and the hypothalamus. In the future, it may be possible that some ocular disease treatments will be via direct action on hypothalamic function. Copyright 2010 American Optometric Association. Published by Elsevier Inc. All rights reserved.

  4. Virtual expansion of the technical vision system for smart vehicles based on multi-agent cooperation model

    Science.gov (United States)

    Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay

    2017-12-01

    Road safety and driving in dense traffic flows pose challenges in receiving information about surrounding moving objects, some of which may be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in the current road scene via a system of cooperating smart vehicles exchanging information. It also describes the intelligent agent model, and provides methods and algorithms for identifying and evaluating various characteristics of moving objects in a video stream. The authors also suggest ways of integrating the information from the technical vision system into the model, with further expansion of virtual monitoring for the system's objects. Implementation of this approach can help to expand the virtual field of view of a technical vision system.

  5. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    Acts as a single source reference providing readers with an overview of how computer vision can contribute to the different applications in the field of road transportation. This book presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each presented in a tutorial manner, describing the motivation for and benefits of the application, and a description of the state of the art.

  6. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    Science.gov (United States)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly hoped for, as the birth rate of low-birth-weight babies increases. The respiration of low-birth-weight babies is particularly uncertain because their central nervous and respiratory functions are immature; as a result, a low-birth-weight baby often suffers from respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor might damage the newborn's skin, monitoring neonatal respiration this way is a real burden. Therefore, we developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor capable of non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region with respiration. We carried out a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor enables a minimally invasive procedure.
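    The waveform extraction step has a simple structure: average the measured height of the thoracic-abdominal region in each frame, and the resulting time series rises and falls with respiration. The sketch below assumes height maps from an active stereo sensor; the region of interest and data are placeholders.

```python
# Respiratory waveform as the per-frame mean height of a chest/abdomen ROI.
import numpy as np

def respiratory_waveform(height_frames, roi):
    """height_frames: iterable of 2D height maps; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1].mean() for f in height_frames])

frames = np.random.rand(300, 240, 320)          # placeholder height maps
wave = respiratory_waveform(frames, (80, 160, 100, 220))
# The respiratory rate can then be read off the dominant frequency,
# e.g., from np.fft.rfft of the detrended waveform.
```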

  7. Passive ventilation systems with heat recovery and night cooling

    DEFF Research Database (Denmark)

    Hviid, Christian Anker; Svendsen, Svend

    2008-01-01

    with little energy consumption and a satisfactory indoor climate. The concept is based on using passive measures like stack- and wind-driven ventilation, effective night cooling and low-pressure-loss heat recovery using two fluid-coupled water-to-air heat exchangers developed at the Technical University...... simulation program ESP-r to model the heat and air flows, and the results show the feasibility of the proposed ventilation concept in terms of low energy consumption and good indoor climate....

  8. Micro-vision servo control of a multi-axis alignment system for optical fiber assembly

    International Nuclear Information System (INIS)

    Chen, Weihai; Yu, Fei; Qu, Jianliang; Chen, Wenjie; Zhang, Jianbin

    2017-01-01

    This paper describes a novel optical fiber assembly system featuring a multi-axis alignment function based on micro-vision feedback control. It consists of an active parallel alignment mechanism, a passive compensation mechanism, a micro-gripper and a micro-vision servo control system. The active parallel alignment part is a parallelogram-based design with remote-center-of-motion (RCM) function to achieve precise rotation without fatal lateral motion. The passive mechanism, with five degrees of freedom (5-DOF), is used to implement passive compensation for multi-axis errors. A specially designed 1-DOF micro-gripper mounted onto the active parallel alignment platform is adopted to grasp and rotate the optical fiber. A micro-vision system equipped with two charge-coupled device (CCD) cameras is introduced to observe the small field of view and obtain multi-axis errors for servo feedback control. The two CCD cameras are installed in an orthogonal arrangement—thus the errors can be easily measured via the captured images. Meanwhile, a series of tracking and measurement algorithms based on specific features of the target objects are developed. Details of the force and displacement sensor information acquisition in the assembly experiment are also provided. An experiment demonstrates the validity of the proposed visual algorithm by achieving the task of eliminating errors and inserting an optical fiber to the U-groove accurately. (paper)

  9. A future vision of nuclear material information systems

    International Nuclear Information System (INIS)

    Suski, N.; Wimple, C.

    1999-01-01

    To address the current and future needs for nuclear materials management and safeguards information, Lawrence Livermore National Laboratory envisions an integrated nuclear information system that will support several functions. The vision is to link distributed information systems via a common communications infrastructure designed to address the information interdependencies between two major elements: Domestic, with information about specific nuclear materials and their properties, and International, with information pertaining to foreign nuclear materials, facility design and operations. The communication infrastructure will enable data consistency, validation and reconciliation, as well as provide a common access point and user interface for a broad range of nuclear materials information. Information may be transmitted to, from, and within the system by a variety of linkage mechanisms, including the Internet. Strict access control will be employed as well as data encryption and user authentication to provide the necessary information assurance. The system can provide a mechanism not only for data storage and retrieval, but will eventually provide the analytical tools necessary to support the U.S. government's nuclear materials management needs and non-proliferation policy goals

  10. Diagnosis System for Diabetic Retinopathy and Glaucoma Screening to Prevent Vision Loss

    Directory of Open Access Journals (Sweden)

    Siva Sundhara Raja DHANUSHKODI

    2014-03-01

    Full Text Available Aim: Diabetic retinopathy (DR) and glaucoma are two of the most common retinal disorders and major causes of blindness in diabetic patients. DR appears in retinal images as damage to the retinal blood vessels, which leads to the formation of hemorrhages spread over the entire region of the retina. Glaucoma is caused by hypertension in diabetic patients. Both DR and glaucoma lead to vision loss in diabetic patients. Hence, a computer-aided diagnosis system for DR and glaucoma screening is proposed in this paper to prevent vision loss. Method: The DR diagnosis system consists of two stages, namely detection and segmentation of fovea and hemorrhages. The glaucoma screening system consists of three stages, namely blood vessel segmentation, extraction of the optic disc (OD) and optic cup (OC) regions, and determination of the rim area between OD and OC. Results: The specificity and accuracy for hemorrhage detection are found to be 98.47% and 98.09%, respectively. The accuracy for OD detection is found to be 99.3%. This outperforms state-of-the-art methods. Conclusion: In this paper, a diagnosis system is developed to classify DR and glaucoma into mild, moderate and severe grades, respectively.

  11. Optical correction and quality of vision of the French soldiers stationed in the Republic of Djibouti in 2009.

    Science.gov (United States)

    Vignal, Rodolphe; Ollivier, Lénaïck

    2011-03-01

    To ensure vision readiness on the battlefield, the French military has been providing its soldiers with eyewear since World War I. A military refractive surgery program was initiated in 2008. A prospective questionnaire-based investigation on optical correction and quality of vision among active duty members with visual deficiencies stationed in Djibouti, Africa, was conducted in 2009. It revealed that 59.3% of the soldiers were wearing spectacles, 21.2% were wearing contact lenses--despite official recommendations--and 8.5% had undergone refractive surgery. Satisfaction rates were high with refractive surgery and contact lenses; 33.6% of eyeglass wearers were planning to have surgery. Eye dryness and night vision disturbances were the most reported symptoms following surgery. Military optical devices were under-prescribed before deployment. This suggests that additional and more effective studies on the use of military optical devices should be performed and policy supporting refractive surgery in military populations should be strengthened.

  12. Day-to-night heat storage in greenhouses

    NARCIS (Netherlands)

    Seginer, Ido; Straten, van Gerrit; Beveren, van Peter J.M.

    2017-01-01

    Day-to-night heat storage in water tanks (buffers) is common practice in cold-climate greenhouses, where gas is burned during the day for carbon dioxide enrichment. In Part 1 of this study, an optimal control approach was outlined for such a system, the basic idea being that the virtual value

  13. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly...... of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7 degrees of freedom humanoid robot arm. Successful Ping-Pong play between the robot arm and a human is achieved with a high success rate of 88%.

  14. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  15. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Full Text Available Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  16. Night airglow in RGB mode

    Directory of Open Access Journals (Sweden)

    Mikhalev А.V.

    2016-09-01

    Full Text Available To study the dynamics of the upper atmosphere, we consider results of night-sky photometry using a color CCD camera, taking into account the night airglow and features of its spectral composition. We use night airglow observations for 2010–2015, obtained at the ISTP SB RAS Geophysical Observatory (52° N, 103° E) by a camera with a KODAK KAI-11002 CCD sensor. We estimate the average brightness of the night sky in the R, G, B channels of the color camera for eastern Siberia, with typical values ranging from ~0.008 to 0.01 erg·cm–2·s–1. Besides, we determine seasonal variations in the night-sky luminosities in the R, G, B channels: luminosities decrease in spring, increase in autumn, and have a pronounced summer maximum, which can be explained by scattered light and is associated with the location of the Geophysical Observatory. We consider geophysical phenomena and their optical effects in the R, G, B channels of the color camera. For some geophysical phenomena (geomagnetic storms, sudden stratospheric warmings), we demonstrate the possibility of a quantitative relationship between enhanced signals in the R and G channels and increases in the intensities of the discrete 557.7 and 630 nm emissions, which are predominant in the airglow spectrum.
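    The basic photometric reduction described (average brightness per color channel) is straightforward to sketch. The example assumes a stack of dark-subtracted RGB frames; converting the channel means to absolute units such as erg·cm–2·s–1 requires an instrument-specific calibration not shown here.

```python
# Mean night-sky brightness per R, G, B channel over a stack of frames.
import numpy as np

def channel_means(frames):
    """frames: (N, H, W, 3) array of dark-subtracted R, G, B sky images."""
    return frames.reshape(-1, 3).mean(axis=0)

frames = np.random.rand(10, 480, 640, 3)   # placeholder frame stack
r_mean, g_mean, b_mean = channel_means(frames)
print(r_mean, g_mean, b_mean)
```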

  17. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  18. Synthetic vision and memory for autonomous virtual humans

    OpenAIRE

    PETERS, CHRISTOPHER; O'SULLIVAN, CAROL ANN

    2002-01-01

    A memory model based on 'stage theory', an influential concept of memory from the field of cognitive psychology, is presented for application to autonomous virtual humans. The virtual human senses external stimuli through a synthetic vision system. The vision system incorporates multiple modes of vision in order to accommodate a perceptual attention approach. The memory model is used to store perceived and attended object information at different stages in a filtering...

  19. A Test-Bed of Secure Mobile Cloud Computing for Military Applications

    Science.gov (United States)

    2016-09-13

    Problem studied: Many military applications have the following characteristics: they start from a mobile device (e.g., a night vision goggle)...

  20. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  1. Evaluating the image quality of Closed Circuit Television magnification systems versus a head-mounted display for people with low vision.

    Science.gov (United States)

    Lin, Chern Sheng; Jan, Hvey-An; Lay, Yun-Long; Huang, Chih-Chia; Chen, Hsien-Tse

    2014-01-01

    In this research, image analysis was used to optimize the visual output of a traditional Closed Circuit Television (CCTV) magnifying system and a head-mounted display (HMD) for people with low vision. There were two purposes: (1) to determine the benefit of using an image analysis system to customize image quality for a person with low vision, and (2) to have people with low vision evaluate a traditional CCTV magnifier and an HMD, each customized to the user's needs and preferences. A CCTV system can electronically alter images by increasing the contrast, brightness, and magnification for the visually disabled when they are reading texts and pictures. Test methods were developed to evaluate and customize a magnification system for persons with low vision. The head-mounted display with CCTV was used to obtain better depth of field and a higher modulation transfer function from the video camera. By sensing the parameters of the environment (e.g., ambient light level) and collecting the user's specific characteristics, the system could make adjustments according to the user's needs, thus allowing the visually disabled to read more efficiently.
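    The adjustment loop described (sense the environment, then set image parameters to the user's needs) can be sketched as a simple per-frame gain/offset stage. The gain rules and thresholds below are invented for illustration.

```python
# Ambient-light-driven contrast/brightness adjustment for a video magnifier.
import cv2

def adjust(frame, ambient_lux, user_contrast=1.0):
    """Scale contrast (alpha) and brightness (beta) from ambient light."""
    alpha = user_contrast * (1.5 if ambient_lux < 50 else 1.1)  # contrast
    beta = 40 if ambient_lux < 50 else 10                       # brightness
    return cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)
```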

  2. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  3. Optics, illumination, and image sensing for machine vision II

    International Nuclear Information System (INIS)

    Svetkoff, D.J.

    1987-01-01

    These proceedings collect papers on the general subject of machine vision. Topics include illumination and viewing systems, x-ray imaging, automatic SMT inspection with x-ray vision, and 3-D sensing for machine vision.

  4. Night driving simulation in a randomized prospective comparison of Visian toric implantable collamer lens and conventional PRK for moderate to high myopic astigmatism.

    Science.gov (United States)

    Schallhorn, Steven; Tanzer, David; Sanders, Donald R; Sanders, Monica; Brown, Mitch; Kaupp, Sandor E

    2010-05-01

    To compare changes in simulated night driving performance after Visian Toric Implantable Collamer Lens (TICL; STAAR Surgical) implantation and photorefractive keratectomy (PRK) for the correction of moderate to high myopic astigmatism. This prospective, randomized study consisted of 43 eyes implanted with the TICL (20 bilateral cases) and 45 eyes receiving conventional PRK (VISX Star S3 excimer laser) with mitomycin C (22 bilateral cases) for moderate to high myopia (-6.00 to -20.00 diopters [D] sphere) measured at the spectacle plane and 1.00 to 4.00 D of astigmatism. As a substudy, 27 eyes of 14 TICL patients and 41 eyes of 21 PRK patients underwent a simulated night driving test. The detection and identification distances of road signs and hazards with the Night Driving Simulator (Vision Sciences Research Corp) were measured with and without a glare source before and 6 months after each procedure. No significant difference between the TICL and PRK groups was noted in the pre- to postoperative change in Night Driving Simulator detection distances with and without the glare source. The differences in identification distances without glare were significantly better for business and traffic road signs and pedestrian hazards in the TICL group relative to the PRK group, whereas with glare, only the pedestrian hazards were significantly better. A clinically relevant change in Night Driving Simulator performance (>0.5 seconds change in ability to identify tasks postoperatively) was significantly better in the TICL group (with and without glare) for all identification tasks. The TICL performed better than conventional PRK in the pre- to postoperative Night Driving Simulator testing with and without a glare source present. Copyright 2010, SLACK Incorporated.

  5. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    AFRL-SA-WP-SR-2017-0012: Operational Based Vision Assessment Automated Vision Test Collection User Guide. Elizabeth Shoda, Alex... (performance period June 2015 – May 2017). The guide documents the automated vision tests, or AVT. Development of the AVT was required to support the threshold-level vision testing capability needed to investigate the...

  6. THE SYSTEM OF TECHNICAL VISION IN THE ARCHITECTURE OF THE REMOTE CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    S. V. Shavetov

    2014-03-01

    Full Text Available The paper deals with the development of a video broadcasting system for controlling mobile robots over the Internet. A brief overview of the issues encountered in real-time video stream broadcasting, and their solutions, is given. Affordable and versatile technical vision solutions are considered. An approach for frame-accurate video rebroadcasting to an unlimited number of end-users is proposed. The optimal performance parameters of network equipment for a given number of cameras are defined. The system was tested on five IP cameras from different manufacturers. The average time delay for broadcasting in MJPEG format was 200 ms over the local network and 500 ms over the Internet.

  7. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  8. Menstrual characteristics and night work among nurses.

    Science.gov (United States)

    Moen, Bente E; Baste, Valborg; Morken, Tone; Alsaker, Kjersti; Pallesen, Ståle; Bjorvatn, Bjørn

    2015-01-01

    Night work has been associated with adverse effects in terms of reproductive health. Specifically, menstruation has been suggested to be negatively impacted by night work, which in turn may influence fertility. This study investigated whether working nights is related to menstrual characteristics and whether there is a relationship between shift work disorder (SWD) and menstruation. The study was cross-sectional, with a response rate of 38%. The sample comprised female nurses who were members of the Norwegian Nurses Association; below 50 yr of age, not pregnant, not using hormonal pills or intrauterine devices, and not having reached menopause (n=766). The nurses answered a postal survey including questions about night work and menstrual characteristics. Fifteen per cent reported irregular menstruation. Thirty-nine per cent of the nurses were classified as having SWD. Logistic regression analyses of the relationship between irregular menstruation and night work did not show any associations. Furthermore, no associations were found between cycle length or bleeding period and night work parameters. No associations were found between menstrual characteristics and SWD.

  9. High-speed potato grading and quality inspection based on a color vision system

    Science.gov (United States)

    Noordam, Jacco C.; Otten, Gerwoud W.; Timmermans, Toine J. M.; van Zwol, Bauke H.

    2000-03-01

    A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape and external defects such as greening, mechanical damages, rhizoctonia, silver scab, common scab, cracks and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC Digital Signal Processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses Linear Discriminant Analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier based shape classification technique. Features such as area, eccentricity and central moments are used to discriminate between similar colored defects. Experiments with red and yellow skin-colored potatoes have shown that the system is robust and consistent in its classification.
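    As a sketch of the pixel-classification step the abstract names (a Mahalanobis distance classifier over colour features), assuming per-class Gaussian colour statistics; the LDA projection is omitted and the training values are invented, so this is an illustrative reconstruction rather than the shipped DSP code:

```python
import numpy as np

def fit_class(pixels: np.ndarray):
    """Gaussian colour model for one class from training pixels of shape (N, 3)."""
    mean = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    return mean, cov_inv

def mahalanobis_classify(pixel: np.ndarray, models: dict) -> str:
    """Assign an RGB pixel to the class with the smallest Mahalanobis distance."""
    def dist(mean, cov_inv):
        d = pixel - mean
        return float(d @ cov_inv @ d)
    return min(models, key=lambda name: dist(*models[name]))

# Toy training data: RGB samples for healthy skin vs. greening (values invented).
rng = np.random.default_rng(0)
models = {
    "skin":     fit_class(rng.normal([180, 140, 90], 10, size=(500, 3))),
    "greening": fit_class(rng.normal([110, 150, 80], 10, size=(500, 3))),
}
print(mahalanobis_classify(np.array([115.0, 148.0, 82.0]), models))  # -> greening
```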

  10. Real-time machine vision system using FPGA and soft-core processor

    Science.gov (United States)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through Fast Simplex Link (FSL). The latency for computing distance and angle of camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.

  11. Seeing in the dark: vision and visual behaviour in nocturnal bees and wasps.

    Science.gov (United States)

    Warrant, Eric J

    2008-06-01

    In response to the pressures of predation, parasitism and competition for limited resources, several groups of (mainly) tropical bees and wasps have independently evolved a nocturnal lifestyle. Like their day-active (diurnal) relatives, these insects possess apposition compound eyes, a relatively light-insensitive eye design that is best suited to vision in bright light. Despite this, nocturnal bees and wasps are able to forage at night, with many species capable of flying through a dark and complex forest between the nest and a foraging site, a behaviour that relies heavily on vision and is limited by light intensity. In the two best-studied species - the Central American sweat bee Megalopta genalis (Halictidae) and the Indian carpenter bee Xylocopa tranquebarica (Apidae) - learned visual landmarks are used to guide foraging and homing. Their apposition eyes, however, have only around 30 times greater optical sensitivity than the eyes of their closest diurnal relatives, a fact that is apparently inconsistent with their remarkable nocturnal visual abilities. Moreover, signals generated in the photoreceptors, even though amplified by a high transduction gain, are too noisy and slow to transmit significant amounts of information in dim light. How have nocturnal bees and wasps resolved these paradoxes? Even though this question remains to be answered conclusively, a mounting body of theoretical and experimental evidence suggests that the slow and noisy visual signals generated by the photoreceptors are spatially summed by second-order monopolar cells in the lamina, a process that could dramatically improve visual reliability for the coarser and slower features of the visual world at night.

  12. Melas Chasma, Day and Night.

    Science.gov (United States)

    2002-01-01

    This image is a mosaic of day and night infrared images of Melas Chasma taken by the camera system on NASA's Mars Odyssey spacecraft. The daytime temperature images are shown in black and white, superimposed on the martian topography. A single nighttime temperature image is superimposed in color. The daytime temperatures range from approximately -35 degrees Celsius (-31 degrees Fahrenheit) in black to -5 degrees Celsius (23 degrees Fahrenheit) in white. Overlapping landslides and individual layers in the walls of Melas Chasma can be seen in this image. The landslides flowed over 100 kilometers (62 miles) across the floor of Melas Chasma, producing deposits with ridges and grooves of alternating warm and cold materials that can still be seen. The temperature differences in the daytime images are due primarily to lighting effects, where sunlit slopes are warm (bright) and shadowed slopes are cool (dark). The nighttime temperature differences are due to differences in the abundance of rocky materials that retain their heat at night and stay relatively warm (red). Fine grained dust and sand (blue) cools off more rapidly at night. These images were acquired using the thermal infrared imaging system infrared Band 9, centered at 12.6 micrometers. Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the 2001 Mars Odyssey mission for NASA's Office of Space Science in Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson and NASA's Johnson Space Center, Houston, operate the science instruments. Additional science partners are located at the Russian Aviation and Space Agency and at Los Alamos National Laboratories, New Mexico. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL.

  13. Analysis of circadian properties and healthy levels of blue light from smartphones at night

    Science.gov (United States)

    Oh, Ji Hye; Yoo, Heeyeon; Park, Hoo Keun; Do, Young Rag

    2015-06-01

    This study proposes representative figures of merit for circadian and vision performance for healthy and efficient use of smartphone displays. The recently developed figures of merit for circadian luminous efficacy of radiation (CER) and circadian illuminance (CIL) related to human health and circadian rhythm were measured to compare three kinds of commercial smartphone displays. The CIL values for social network service (SNS) messenger screens from all three displays were higher than 41.3 biolux (blx) in a dark room at night, and the highest CIL value reached 50.9 blx. These CIL values corresponded to melatonin suppression values (MSVs) of 7.3% and 11.4%, respectively. Moreover, smartphone use in a bright room at night had much higher CIL and MSV values (58.7 ~ 105.2 blx and 15.4 ~ 36.1%, respectively). This study also analyzed the nonvisual and visual optical properties of the three smartphone displays while varying the distance between the screen and eye and controlling the brightness setting. Finally, a method to possibly attenuate the unhealthy effects of smartphone displays was proposed and investigated by decreasing the emitting wavelength of blue LEDs in a smartphone LCD backlight and subsequently reducing the circadian effect of the display.

  14. Hot Flashes and Night Sweats (PDQ)

    Science.gov (United States)

    Hot flashes and night sweats may be side effects of cancer or its treatment and can affect quality of life in many patients with cancer.

  15. ROV-based Underwater Vision System for Intelligent Fish Ethology Research

    Directory of Open Access Journals (Sweden)

    Rui Nian

    2013-09-01

    Full Text Available Fish ethology is a prospective discipline for ocean surveys. In this paper, an ROV-based system is established to perform underwater visual tasks with customized optical sensors installed. An image quality enhancement method is first presented in the context of underwater imaging models, combining homomorphic filtering and wavelet decomposition. The underwater vision system can further detect and track swimming fish in the resulting images with strategies developed using curve evolution and particle filtering, in order to obtain a deeper understanding of fish behaviours. The simulation results have shown the excellent performance of the developed scheme, in regard to both robustness and effectiveness.

  16. The night sky brightness at McDonald Observatory

    Science.gov (United States)

    Kalinowski, J. K.; Roosen, R. G.; Brandt, J. C.

    1975-01-01

    Baseline observations of the night sky brightness in B and V are presented for McDonald Observatory. In agreement with earlier work by Elvey and Rudnick (1937) and Elvey (1943), significant night-to-night and same-night variations in sky brightness are found. Possible causes for these variations are discussed. The largest variation in sky brightness found during a single night is approximately a factor of two, a value which corresponds to a factor-of-four variation in airglow brightness. The data are used to comment on the accuracy of previously published surface photometry of M 81.

  17. Calibration method for a vision guiding-based laser-tracking measurement system

    International Nuclear Information System (INIS)

    Shao, Mingwei; Wei, Zhenzhong; Hu, Mengjie; Zhang, Guangjun

    2015-01-01

    Laser-tracking measurement systems (laser trackers) based on a vision-guiding device are widely used in industrial fields, and their calibration is important. As conventional methods typically have many disadvantages, such as difficult machining of the target and overdependence on the retroreflector, a novel calibration method is presented in this paper. The retroreflector, which is necessary in the normal calibration method, is unnecessary in our approach. As the laser beam is linear, points on the beam can be obtained with the help of a normal planar target. In this way, we can determine the function of a laser beam under the camera coordinate system, while its corresponding function under the laser-tracker coordinate system can be obtained from the encoder of the laser tracker. Clearly, when several groups of functions are confirmed, the rotation matrix can be solved from the direction vectors of the laser beams in different coordinate systems. As the intersection of the laser beams is the origin of the laser-tracker coordinate system, the translation matrix can also be determined. Our proposed method not only achieves the calibration of a single laser-tracking measurement system but also provides a reference for the calibration of a multistation system. Simulations to evaluate the effects of some critical factors were conducted. These simulations show the robustness and accuracy of our method. In real experiments, the root mean square error of the calibration result reached 1.46 mm within a range of 10 m, even though the vision-guiding device focuses on a point approximately 5 m away from the origin of its coordinate system, with a field of view of approximately 200 mm  ×  200 mm. (paper)
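    The rotation can be recovered from corresponding laser-beam direction vectors expressed in the two frames. A minimal sketch of one standard solver for that subproblem (an SVD-based Kabsch/orthogonal-Procrustes fit, not necessarily the authors' exact algorithm), with synthetic data standing in for real measurements:

```python
import numpy as np

def rotation_from_directions(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Least-squares rotation R such that R @ a_i ~= b_i (Kabsch/Procrustes)."""
    H = a.T @ b                                # 3x3 cross-covariance of the sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Demo: beam directions in the camera frame, mapped into a fictitious
# tracker frame by a known ground-truth rotation, then recovered.
rng = np.random.default_rng(0)
dirs_camera = rng.normal(size=(5, 3))
dirs_camera /= np.linalg.norm(dirs_camera, axis=1, keepdims=True)
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0,              0,             1]])
dirs_tracker = dirs_camera @ R_true.T          # b_i = R_true @ a_i

R_est = rotation_from_directions(dirs_camera, dirs_tracker)
assert np.allclose(R_est, R_true, atol=1e-8)
```

    The translation then follows from the beam intersection point, since, as the abstract notes, the beams meet at the origin of the laser-tracker frame.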

  18. Gestalt Principles for Attention and Segmentation in Natural and Artificial Vision Systems

    OpenAIRE

    Kootstra, Gert; Bergström, Niklas; Kragic, Danica

    2011-01-01

    Gestalt psychology studies how the human visual system organizes the complex visual input into unitary elements. In this paper we show how the Gestalt principles for perceptual grouping and for figure-ground segregation can be used in computer vision. A number of studies are presented that demonstrate the applicability of Gestalt principles for the prediction of human visual attention and for the automatic detection and segmentation of unknown objects by a robotic system.

  19. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    Science.gov (United States)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.

  20. System of technical vision for autonomous unmanned aerial vehicles

    Science.gov (United States)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in frames from a camera mounted on a UAV. The trained classifier is invariant to rotation and to small changes in the camera's viewing angle. Finding objects in the image using particle analysis makes it possible to classify regions of different sizes. This method allows the technical vision system to determine more accurately the location of the objects of interest and their movement relative to the camera.
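    The "particle analysis" mentioned is essentially connected-component analysis of a binarized frame. A minimal OpenCV-based sketch of that step (not the authors' LabVIEW implementation; the area threshold is invented):

```python
import cv2
import numpy as np

def find_particles(frame_gray: np.ndarray, min_area: int = 50):
    """Binarize a frame and return (area, centroid) for each detected blob."""
    _, binary = cv2.threshold(frame_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    particles = []
    for i in range(1, n):                       # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area:                    # drop noise specks
            particles.append((area, tuple(centroids[i])))
    return particles

# Example on a synthetic frame containing two bright blobs.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame, (80, 120), 15, 255, -1)
cv2.circle(frame, (220, 60), 30, 255, -1)
print(find_particles(frame))
```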

  1. Effects of Shift Work on the Postural and Psychomotor Performance of Night Workers.

    Science.gov (United States)

    Narciso, Fernanda Veruska; Barela, José A; Aguiar, Stefane A; Carvalho, Adriana N S; Tufik, Sergio; de Mello, Marco Túlio

    2016-01-01

    The purpose of the study was to investigate the effects of shift work on the psychomotor and postural performance of night workers. The study included 20 polysomnography technicians working a schedule of 12-h night shifts followed by 36 h off. On the first day of the protocol, body mass and height were measured, and an actigraph was placed on the wrist of each participant. On the second day of the protocol, sleepiness (Karolinska Sleepiness Scale), postural control (force platform, 30 seconds) and psychomotor performance (Psychomotor Vigilance Task, 10 minutes) were measured before and after the 12-h night work. Results showed that after 12-h night work, sleepiness increased by 59%, and the night work system and sleepiness had a negative impact on the postural and psychomotor vigilance performance of the night workers. Unexpectedly, the force platform proved feasible for detecting sleepiness in this population, underscoring the possibility of using this method in the workplace to prevent occupational injuries and accidents.

  2. 2020 Vision for Tank Waste Cleanup (One System Integration) - 12506

    Energy Technology Data Exchange (ETDEWEB)

    Harp, Benton; Charboneau, Stacy; Olds, Erik [US DOE (United States)

    2012-07-01

    The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot. They also couldn't be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold these chemicals until a more permanent solution could be found. The Cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing Hanford's tank waste. The initiation of hot operations and subsequent full operations

  3. Night work and health status of nurses and midwives. cross-sectional study.

    Science.gov (United States)

    Burdelak, Weronika; Bukowska, Agnieszka; Krysicka, Jolanta; Pepłońska, Beata

    2012-01-01

    The aim of this study was to assess the association between night shift work and the prevalence of diseases and conditions among nurses and midwives. The study included 725 subjects (354 working night shifts and 371 working only during the day). The data were collected via an interview based on the "Standard Shiftwork Index". We analyzed the frequency of diseases and conditions and the relative risk expressed as the odds ratio (adjusted for important confounding factors). The most common diseases in the study population were chronic back pain (47.2%), hypertension (24.5%) and thyroid diseases (21.2%). We found no statistically significant increase in the relative risk of any disease or condition among the night shift nurses compared to the day shift ones. The duration of work performed on night shifts was significantly associated with the relative risk of thyroid diseases, which increased almost twofold in women working 15 or more years in such a system (p for trend: 0.031). The analysis showed a significantly increased (more than eightfold) relative risk of feet swelling in women with 8 or more night duties per month, compared to women having fewer night shifts. We did not observe a higher frequency of diseases in the night shift nurses compared to the day shift nurses. These results may be related to the so-called "Healthy Worker Effect". There is a need for further long-term observational studies in populations of nurses.

  4. Measuring and mapping the night sky brightness of Perth, Western Australia

    Science.gov (United States)

    Biggs, James D.; Fouché, Tiffany; Bilki, Frank; Zadnik, Marjan G.

    2012-04-01

    In order to study the light pollution produced in the city of Perth, Western Australia, we have used a hand-held sky brightness meter to measure the night sky brightness across the city. The data acquired facilitated the creation of a contour map of night sky brightness across the 2400 km² area of the city - the first such map to be produced for a city. Importantly, this map was created using a methodology borrowed from the field of geophysics - the well proven and rigorous techniques of geostatistical analysis and modelling. A major finding of this study is the effect of land use on night sky brightness. By overlaying the night sky brightness map onto a suitably processed Landsat satellite image of Perth we found that locations near commercial and/or light industrial areas have a brighter night sky, whereas locations used for agriculture or having high vegetation coverage have a fainter night sky than surrounding areas. Urban areas have intermediate amounts of vegetation and are intermediate in brightness compared with the above-mentioned land uses. Regions with a higher density of major highways also appear to contribute to increased night sky brightness. When corrected for the effects of direct illumination from high buildings, we found that the night sky brightness in the central business district (CBD) is very close to that expected for a city of Perth's population from modelling work and observations obtained in earlier studies. That our night sky brightness measurements in Perth over 2009 and 2010 are commensurate with those measured in Canadian cities over 30 years earlier implies that the various lighting systems employed in Perth (and probably most other cities) have not been optimised to minimize light pollution over that time. We also found that night sky brightness diminished with distance with an exponent of approximately -0.25 ± 0.02 from 3.5 to 10 km from the Perth CBD, a region characterized by urban and commercial land use. For distances

  5. Evaluation of a color fused dual-band NVG

    NARCIS (Netherlands)

    Hogervorst, M.A.; Toet, A.

    2009-01-01

    We designed and evaluated a dual-band Night Vision Goggles sensor system. The sensor system consists of two optically aligned NVGs fitted with filters splitting the sensitive range into a visual and a near-infrared band. The Color-the-night technique (Hogervorst & Toet, FUSION2008) was used to fuse the two bands.

  6. Pediatric Low Vision

    Science.gov (United States)

    What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  7. Dark nights reverse metabolic disruption caused by dim light at night.

    Science.gov (United States)

    Fonken, L K; Weil, Z M; Nelson, R J

    2013-06-01

    The increasing prevalence of obesity and related metabolic disorders coincides with increasing exposure to light at night. Previous studies report that mice exposed to dim light at night (dLAN) develop symptoms of metabolic syndrome. This study investigated whether mice returned to dark nights after dLAN exposure recover metabolic function. Male Swiss-Webster mice were assigned to either: standard light-dark (LD) conditions for 8 weeks (LD/LD), dLAN for 8 weeks (dLAN/dLAN), LD for 4 weeks followed by 4 weeks of dLAN (LD/dLAN), and dLAN for 4 weeks followed by 4 weeks of LD (dLAN/LD). After 4 weeks in their respective lighting conditions both groups initially placed in dLAN increased body mass gain compared to LD mice. Half of the dLAN mice (dLAN/LD) were then transferred to LD and vice versa (LD/dLAN). Following the transfer dLAN/dLAN and LD/dLAN mice gained more weight than LD/LD and dLAN/LD mice. At the conclusion of the study dLAN/LD mice did not differ from LD/LD mice with respect to weight gain and had lower fat pad mass compared to dLAN/dLAN mice. Compared to all other groups dLAN/dLAN mice decreased glucose tolerance as indicated by an intraperitoneal glucose tolerance test at week 7, indicating that dLAN/LD mice recovered glucose metabolism. dLAN/dLAN mice also increased MAC1 mRNA expression in peripheral fat as compared to both LD/LD and dLAN/LD mice, suggesting peripheral inflammation is induced by dLAN, but not sustained after return to LD. These results suggest that re-exposure to dark nights ameliorates metabolic disruption caused by dLAN exposure. Copyright © 2013 The Obesity Society.

  8. Working at Night in Hospital Environment is a Risk Factor for Arterial Stiffness

    Directory of Open Access Journals (Sweden)

    Sinem Özbay

    2012-09-01

    Full Text Available Aim: Arterial stiffness is an independent risk factor for cardiovascular disease. In previous studies, emotional stress has been reported to be a risk factor for cardiovascular disease. In this study, we aimed to investigate the effects of anxiety, stress and fatigue associated with working at night in a hospital environment on arterial stiffness in physicians. Methods: The study was carried out with 30 physicians employed in the Medical Faculty of Uludağ University between October 2011 and March 2012. Measurements were made using the Pulse Wave Sensor HDI system (Hypertension Diagnostics Inc, Eagan, MN) (Set No: CR000344) by radial artery pulse wave at the onset and end of the night shift. Results: The mean age of the night doctors included in the study was 26 years (range: 22-38) and the female/male ratio was 2/1. Mean values of arterial stiffness were significantly higher after the night shift (1330±360 dyn·s·cm-5) compared to mean values before the night shift (1093±250 dyn·s·cm-5) (p=0.01). In the evaluation of other parameters before and after the night shift, no statistically significant difference was detected (p>0.05). Conclusion: The increase in arterial stiffness in hospital employees after a night shift could be attributed to the effects of stress and fatigue experienced during the night shift. (The Medical Bulletin of Haseki 2012; 50: 93-5)

  9. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    Science.gov (United States)

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalians, which would demand huge computational resources and therefore are not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.
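    As a toy illustration of the synergy described, combining a motion primitive with a structure primitive into one mid-level feature vector; this uses standard OpenCV calls (Farneback flow and raw image moments) rather than the paper's orthogonal variant moments or its VLSI design:

```python
import cv2
import numpy as np

def midlevel_descriptor(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Combine dense optical flow with image moments into one feature vector."""
    # Low-level primitive 1: dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_flow = flow.reshape(-1, 2).mean(axis=0)        # average (dx, dy)

    # Low-level primitive 2: spatial moments of the current frame.
    m = cv2.moments(curr_gray)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # intensity centroid

    # Mid-level abstraction: motion plus structure in a single vector.
    return np.array([mean_flow[0], mean_flow[1], cx, cy])

# Example: a bright square shifted 5 px to the right between two frames.
prev = np.zeros((120, 160), dtype=np.uint8); prev[40:60, 40:60] = 255
curr = np.zeros_like(prev);                  curr[40:60, 45:65] = 255
print(midlevel_descriptor(prev, curr))
```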

  10. The impact of short night-time naps on performance, sleepiness and mood during a simulated night shift.

    Science.gov (United States)

    Centofanti, Stephanie A; Hilditch, Cassie J; Dorrian, Jillian; Banks, Siobhan

    2016-01-01

    Short naps on night shift are recommended in some industries. There is a paucity of evidence to verify the sustained recovery benefits of short naps in the last few hours of the night shift. Therefore, the current study aimed to investigate the sustained recovery benefits of 30- and 10-min nap opportunities during a simulated night shift. Thirty-one healthy participants (18F, 21-35 y) completed a 3-day, between-groups laboratory study with one baseline night (22:00-07:00 h time in bed), followed by one night awake (time awake from 07:00 h on day two through 10:00 h on day three) with random allocation to: a 10-min nap opportunity ending at 04:00 h, a 30-min nap opportunity ending at 04:00 h, or no nap (control). A neurobehavioral test bout was administered approximately every 2 h during wake periods. There were no significant differences between nap conditions for post-nap psychomotor vigilance performance after controlling for pre-nap scores (p > 0.05). The 30-min nap significantly improved subjective sleepiness compared to the 10-min nap and the no-nap control.

  11. Low Vision FAQs

    Science.gov (United States)

    What is low vision? Low vision is a visual impairment, not correctable ... person's ability to perform everyday activities. What causes low vision? Low vision can result from a variety of ...

  12. Active Vision for Sociable Robots

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2001-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  13. Social Constraints on Animate Vision

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2000-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  14. Video-Based Systems (Videobasierte Systeme)

    Science.gov (United States)

    Knoll, Peter

    Video sensors play a central role in driver assistance systems, since they directly support the interpretation of visual information (object classification). At the rear of the vehicle, in the simplest variant, video sensing can assist the ultrasound-based parking aid during parking and maneuvering. In the NightVision night vision system, the area in front of the vehicle, illuminated with infrared light, is captured by a front camera and shown to the driver on a display in the vehicle cockpit (see night vision systems). Other driver assistance systems process the video signals and generate from them targeted information that is evaluated either by stand-alone functions (e.g., lane departure warning) or as additional input for other functions (sensor data fusion).

  15. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
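    The altitude-from-motion idea reduces to the pinhole relation for a downward-looking camera over flat ground: ground texture seen from height h while translating at ground speed v produces image flow of roughly f·v/h pixels per second for focal length f, so h ≈ f·v/flow. A rough sketch under those assumptions (flat terrain, known ground speed; the names and numbers are illustrative, not from the paper):

```python
def relative_altitude(ground_speed: float, flow_px_per_s: float,
                      focal_length_px: float) -> float:
    """Altitude of a downward-looking camera over flat ground.

    Pinhole model: image translation of ground texture is flow = f * v / h,
    hence h = f * v / flow. Units: v in m/s, flow in px/s, f in px.
    """
    angular_flow = flow_px_per_s / focal_length_px   # rad/s
    return ground_speed / angular_flow

# Example: 12 m/s ground speed, 180 px/s measured flow, 900 px focal length.
print(f"altitude ~ {relative_altitude(12.0, 180.0, 900.0):.1f} m")  # ~ 60 m
```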

  16. Vision and Vestibular System Dysfunction Predicts Prolonged Concussion Recovery in Children.

    Science.gov (United States)

    Master, Christina L; Master, Stephen R; Wiebe, Douglas J; Storey, Eileen P; Lockyer, Julia E; Podolak, Olivia E; Grady, Matthew F

    2018-03-01

    Up to one-third of children with concussion have prolonged symptoms lasting beyond 4 weeks. Vision and vestibular dysfunction is common after concussion. It is unknown whether such dysfunction predicts prolonged recovery. We sought to determine which vision or vestibular problems predict prolonged recovery in children. A retrospective cohort of pediatric patients with concussion. A subspecialty pediatric concussion program. Four hundred thirty-two patient records were abstracted. Presence of vision or vestibular dysfunction upon presentation to the subspecialty concussion program. The main outcome of interest was time to clinical recovery, defined by discharge from clinical follow-up, including resolution of acute symptoms, resumption of normal physical and cognitive activity, and normalization of physical examination findings to functional levels. Study subjects were 5 to 18 years (median = 14). A total of 378 of 432 subjects (88%) presented with vision or vestibular problems. A history of motion sickness was associated with vestibular dysfunction. Younger age, public insurance, and presence of headache were associated with later presentation for subspecialty concussion care. Vision and vestibular problems were associated within distinct clusters. Provocable symptoms with vestibulo-ocular reflex (VOR) and smooth pursuits and abnormal balance and accommodative amplitude (AA) predicted prolonged recovery time. Vision and vestibular problems predict prolonged concussion recovery in children. A history of motion sickness may be an important premorbid factor. Public insurance status may represent problems with disparities in access to concussion care. Vision assessments in concussion must include smooth pursuits, saccades, near point of convergence (NPC), and accommodative amplitude (AA). A comprehensive, multidomain assessment is essential to predict prolonged recovery time and enable active intervention with specific school accommodations and targeted rehabilitation.

  17. What is the preferred number of consecutive night shifts?

    DEFF Research Database (Denmark)

    Nabe-Nielsen, Kirsten; Jensen, Marie Aarrebo; Hansen, Åse Marie

    2016-01-01

    ...% preferred '2 + 2' and 26% preferred '7 + 7'. Participants who preferred longer spells of night work experienced night work as less demanding, found it easier to sleep at different times of the day, and were more frequently evening types compared with participants who preferred shorter spells of night work.

  18. Reinforcement learning in computer vision

    Science.gov (United States)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.

  19. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
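    For reference, the recursive equation referred to is ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1), after which any rectangular sum costs four lookups. A minimal serial sketch (the paper's contribution, the row-parallel hardware decomposition, is not reproduced here):

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Serial integral image via the standard recursive equation:
    ii[y, x] = img[y, x] + ii[y-1, x] + ii[y, x-1] - ii[y-1, x-1]."""
    h, w = img.shape
    ii = np.zeros((h + 1, w + 1), dtype=np.int64)   # zero-padded border
    for y in range(1, h + 1):
        for x in range(1, w + 1):
            ii[y, x] = (int(img[y - 1, x - 1]) + ii[y - 1, x]
                        + ii[y, x - 1] - ii[y - 1, x - 1])
    return ii

def rect_sum(ii: np.ndarray, top: int, left: int, bottom: int, right: int) -> int:
    """Sum of img[top:bottom, left:right] in four lookups, any rectangle size."""
    return int(ii[bottom, right] - ii[top, right]
               - ii[bottom, left] + ii[top, left])

img = np.arange(12, dtype=np.uint8).reshape(3, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 4) == img[1:3, 1:4].sum()
```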

  20. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  1. The secret world of shrimps: polarisation vision at its best.

    Directory of Open Access Journals (Sweden)

    Sonja Kleinlogel

    Full Text Available BACKGROUND: Animal vision spans a great range of complexity, with systems evolving to detect variations in light intensity, distribution, colour, and polarisation. Polarisation vision systems studied to date detect one to four channels of linear polarisation, combining them in opponent pairs to provide intensity-independent operation. Circular polarisation vision has never been seen, and is widely believed to play no part in animal vision. METHODOLOGY/PRINCIPAL FINDINGS: Polarisation is fully measured via Stokes' parameters--obtained by combined linear and circular polarisation measurements. Optimal polarisation vision is the ability to see Stokes' parameters: here we show that the crustacean Gonodactylus smithii measures the exact components required. CONCLUSIONS/SIGNIFICANCE: This vision provides optimal contrast-enhancement and precise determination of polarisation with no confusion states or neutral points--significant advantages. Linear and circular polarisation each give partial information about the polarisation of light--but the combination of the two, as we will show here, results in optimal polarisation vision. We suggest that linear and circular polarisation vision not be regarded as different modalities, since both are necessary for optimal polarisation vision; their combination renders polarisation vision independent of strongly linearly or circularly polarised features in the animal's environment.
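    For orientation, the Stokes parameters that "fully measure" polarisation are standard optics (textbook notation, not taken from the paper): with I at a subscripted angle denoting the intensity passed by a linear polariser at that angle, and I_R, I_L the right- and left-circular components,

```latex
\begin{align*}
S_0 &= I_{0^\circ} + I_{90^\circ}   && \text{total intensity}\\
S_1 &= I_{0^\circ} - I_{90^\circ}   && \text{horizontal vs. vertical linear}\\
S_2 &= I_{45^\circ} - I_{135^\circ} && \text{diagonal linear}\\
S_3 &= I_{R} - I_{L}                && \text{right vs. left circular}
\end{align*}
```

    Linear polarimetry alone recovers only S_0, S_1 and S_2; the claim in this record is that the stomatopod also measures the circular component S_3.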

  2. Time-dependent effects of dim light at night on re-entrainment and masking of hamster activity rhythms.

    Science.gov (United States)

    Frank, David W; Evans, Jennifer A; Gorman, Michael R

    2010-04-01

    Bright light has been established as the most ubiquitous environmental cue that entrains circadian timing systems under natural conditions. Light equivalent in intensity to moonlight (dim nighttime illumination) accelerated re-entrainment of hamster activity rhythms to 4-hour phase advances and delays of an otherwise standard laboratory photocycle. The purpose of this study was to determine if a sensitive period existed in the night during which dim illumination had a robust influence on speed of re-entrainment. Male Siberian hamsters were either exposed to dim light throughout the night, for half of the night, or not at all. Compared to dark nights, dim illumination throughout the entire night decreased by 29% the time for the midpoint of the active phase to re-entrain to a 4-hour phase advance and by 26% for a 4-hour delay. Acceleration of advances and delays were also achieved with 5 hours of dim light per night, but effects depended on whether dim light was present in the first half, second half, or first and last quarters of the night. Both during phase shifting and steady-state entrainment, partially lit nights also produced strong positive and negative masking effects, as well as entrainment aftereffects in constant darkness. Thus, even in the presence of a strong zeitgeber, light that might be encountered under a natural nighttime sky potently modulates the circadian timing system of hamsters.

  3. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  4. Night Rover Challenge

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the Night Rover Challenge was to foster innovations in energy storage technology. Specifically, this challenge asked competitors to create an energy...

  5. Reduced Tolerance to Night Shift in Chronic Shift Workers: Insight From Fractal Regulation.

    Science.gov (United States)

    Li, Peng; Morris, Christopher J; Patxot, Melissa; Yugay, Tatiana; Mistretta, Joseph; Purvis, Taylor E; Scheer, Frank A J L; Hu, Kun

    2017-07-01

    Healthy physiology is characterized by fractal regulation (FR) that generates similar structures in the fluctuations of physiological outputs at different time scales. Perturbed FR is associated with aging and age-related pathological conditions. Shift work, involving repeated and chronic exposure to misaligned environmental and behavioral cycles, disrupts circadian coordination. We tested whether night shifts perturb FR in motor activity and whether night shifts affect FR in chronic shift workers and non-shift workers differently. We studied 13 chronic shift workers and 14 non-shift workers as controls using both field and in-laboratory experiments. In the in-laboratory study, simulated night shifts were used to induce a misalignment between the endogenous circadian pacemaker and the sleep-wake cycles (ie, circadian misalignment) while environmental conditions and food intake were controlled. In the field study, we found that FR was robust in controls but broke down in shift workers during night shifts, leading to more random activity fluctuations as observed in patients with dementia. The night shift effect was present even 2 days after ending night shifts. The in-laboratory study confirmed that night shifts perturbed FR in chronic shift workers and showed that FR in controls was more resilient to the circadian misalignment. Moreover, FR during real and simulated night shifts was more perturbed in those who started shift work at older ages. Chronic shift work causes night shift intolerance, which is probably linked to the degraded plasticity of the circadian control system. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
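    The abstract does not name its analysis method; fractal regulation in activity recordings is conventionally quantified with detrended fluctuation analysis (DFA), so the following compact DFA sketch is offered as background rather than as the authors' pipeline:

```python
import numpy as np

def dfa_alpha(x: np.ndarray, scales=None) -> float:
    """Detrended fluctuation analysis: slope of log F(n) vs. log n.
    alpha near 1.0 indicates the fractal (1/f-like) regulation of healthy
    physiology; alpha near 0.5 indicates uncorrelated, random fluctuations."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                     # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 20).astype(int))
    fluct = []
    for n in scales:
        n_win = len(y) // n
        segments = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        # Detrend each window with a least-squares line, keep the RMS residual.
        ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in segments]
        fluct.append(np.sqrt(np.mean(ms)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# White noise should give alpha near 0.5.
print(round(dfa_alpha(np.random.default_rng(1).normal(size=4096)), 2))
```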

  6. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  7. Night sky a falcon field guide

    CERN Document Server

    Nigro, Nicholas

    2012-01-01

    Night Sky: A Falcon Field Guide covers both summer and winter constellations, planets, and stars found in the northern hemisphere. Conveniently sized to fit in a pocket and featuring detailed photographs, this informative guide makes it easy to identify objects in the night sky even from one's own backyard. With information on optimal weather conditions, preferred viewing locations, and how to use the key tools of the trade, this handbook will help you adeptly navigate the vast and dynamic nighttime skies, and you'll quickly recognize that the night sky's the limit.

  8. Relationship between thyroid stimulating hormone and night shift work.

    Science.gov (United States)

    Moon, So-Hyun; Lee, Bum-Joon; Kim, Seong-Jin; Kim, Hwan-Cheol

    2016-01-01

    Night shift work has well-known adverse effects on health. However, few studies have investigated the relationship between thyroid diseases and night shift work. This study aimed to examine night shift workers and their changes in thyroid stimulating hormone (TSH) levels over time. Medical check-up data (2011-2015) were obtained from 967 female workers at a university hospital in Incheon, Korea. Data regarding TSH levels were extracted from the records, and 2015 was used as a reference point to determine night shift work status. The relationships between TSH levels and night shift work in each year were analyzed using the general linear model (GLM). The generalized estimating equation (GEE) was used to evaluate the repeated measurements over the 5-year period. The GEE analysis revealed that from 2011 to 2015, night shift workers had TSH levels that were 0.303 mIU/L higher than the levels of non-night shift workers (95% CI: 0.087-0.519 mIU/L, p = 0.006) after adjusting for age and department. When TSH levels ≥ 4.5 mIU/L were used to identify subclinical hypothyroidism, night shift workers exhibited a 1.399-fold higher risk of subclinical hypothyroidism (95% CI: 1.050-1.863, p = 0.022) compared to their non-night shift counterparts. This result suggests that night shift workers may have an increased risk of thyroid diseases compared to non-night shift workers.
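    As a generic illustration of the GEE step (repeated yearly TSH measurements nested within workers), a sketch using statsmodels with an exchangeable working correlation; the column names and data are invented, and the authors' exact model specification may differ:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy long-format data: yearly TSH per worker, 2011-2015 (values invented).
rng = np.random.default_rng(0)
n_workers, n_years = 50, 5
df = pd.DataFrame({
    "worker_id":   np.repeat(np.arange(n_workers), n_years),
    "year":        np.tile(np.arange(2011, 2016), n_workers),
    "night_shift": np.repeat(rng.integers(0, 2, n_workers), n_years),
    "age":         np.repeat(rng.integers(25, 55, n_workers), n_years),
})
df["tsh"] = 1.8 + 0.3 * df["night_shift"] + rng.normal(0, 0.5, len(df))

# GEE accounts for the correlation between repeated measurements on one worker.
model = smf.gee("tsh ~ night_shift + age", groups="worker_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().summary())
```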

  9. ABCs of foveal vision

    Science.gov (United States)

    Matchko, Roy M.; Gerhart, Grant R.

    2001-12-01

    This paper presents a simple mathematical performance model of the human foveal vision system based on an extensive analysis of the Blackwell-McCready (BM) data set. It includes a closed-form equation, the (ABC)t law, that allows the analyst to predict the entire range of BM threshold data. Relationships are derived among the four fundamental parameters of foveal vision: target area A, background luminance B, threshold contrast C, and stimulus presentation time t. Hyperbolic-curve fits on log-log plots of the data lead to the well-known laws of Ricco, Blackwell, Weber and Fechner, and Bloch. This paper unifies important relationships associated with target and background scene parameters as they relate to the human foveal vision process. The process of detecting a BM target, using foveal vision, is reduced to the total temporal summation of light energy modified by a multiplicative energy ratio. A stochastic model of human observer performance is presented in terms of a cumulative Gaussian distribution, which is a function of the apparent and BM contrast threshold values.
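    The closed-form (ABC)t equation itself is not reproduced in this record; for orientation, the classical limiting laws it is said to unify are standard psychophysics (A target area, B background luminance, C threshold contrast, t presentation time):

```latex
% Standard forms, each valid in its limiting regime; the paper's own
% closed-form (ABC)^t equation is not reproduced here.
\begin{align*}
  C \cdot A &= \text{const.} && \text{Ricco's law (small targets: complete spatial summation)}\\
  C \cdot t &= \text{const.} && \text{Bloch's law (brief stimuli: complete temporal summation)}\\
  \frac{\Delta B}{B} &= \text{const.} && \text{Weber's law (threshold contrast independent of } B\text{)}
\end{align*}
```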

  10. Estimation of Theaflavins (TF) and Thearubigins (TR) Ratio in Black Tea Liquor Using Electronic Vision System

    Science.gov (United States)

    Akuli, Amitava; Pal, Abhra; Ghosh, Arunangshu; Bhattacharyya, Nabarun; Bandhopadhyya, Rajib; Tamuly, Pradip; Gogoi, Nagen

    2011-09-01

    Quality of black tea is generally assessed using organoleptic tests by professional tea tasters. They determine the quality of black tea based on its appearance (in dry condition and during liquor formation), aroma and taste. Variation in the above parameters is actually contributed by a number of chemical compounds like Theaflavins (TF), Thearubigins (TR), caffeine, linalool, geraniol, etc. Among the above, TF and TR are the most important chemical compounds, which actually contribute to the formation of taste, colour and brightness in tea liquor. Estimation of TF and TR in black tea is generally done using a spectrophotometer. However, the analysis technique requires rigorous and time-consuming sample preparation, and the operation of a costly spectrophotometer requires expert manpower. To overcome the above problems, an Electronic Vision System based on digital image processing techniques has been developed. The system is faster, low cost, repeatable and can estimate the TF/TR ratio of black tea liquor with accuracy. The data analysis is done using Principal Component Analysis (PCA), Multiple Linear Regression (MLR) and Multiple Discriminant Analysis (MDA). A correlation has been established between the colour of tea liquor images and the TF/TR ratio. This paper describes the newly developed E-Vision system, experimental methods, data analysis algorithms and, finally, the performance of the E-Vision System as compared to the results of a traditional spectrophotometer.
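    As an illustration of the regression step, a minimal sketch mapping mean RGB features of liquor images to a measured TF/TR ratio via PCA plus multiple linear regression; the data are invented, and the real system's features and calibration are more elaborate:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Toy training set: mean R, G, B of each liquor image, with the TF/TR ratio
# measured for the same sample by spectrophotometry (all values invented).
X = np.array([[120, 80, 40], [135, 90, 45], [110, 75, 38],
              [150, 100, 50], [100, 70, 35], [140, 95, 48]], dtype=float)
y = np.array([0.08, 0.10, 0.07, 0.12, 0.06, 0.11])   # TF/TR ratios

# PCA decorrelates the colour features before the multiple linear regression.
model = make_pipeline(PCA(n_components=2), LinearRegression())
model.fit(X, y)

new_image_features = np.array([[128, 85, 42]], dtype=float)
print(model.predict(new_image_features))   # estimated TF/TR ratio
```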

  11. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    Science.gov (United States)

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time, which enhances the coarser and slower features of the scene at the expense of noisier, finer and faster features, has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
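
    A hedged toy model of the summation idea, not the moth's neural circuit: averaging a noisy frame stream over a spatial window and over frames trades resolution for signal-to-noise ratio.

        # Sketch: spatial + temporal summation of a dim, noisy image stream.
        import numpy as np
        from scipy.ndimage import uniform_filter

        rng = np.random.default_rng(0)
        signal = 1.0                                        # dim, uniform scene
        frames = signal + rng.normal(0, 1.0, (8, 64, 64))   # 8 noisy frames

        spatial = uniform_filter(frames, size=(1, 5, 5))    # spatial summation (5x5)
        spatiotemporal = spatial.mean(axis=0)               # temporal summation (8 frames)

        print("raw noise:   ", frames.std())
        print("summed noise:", spatiotemporal.std())        # ~sqrt(5*5*8) times smaller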

  12. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports about experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  13. Artificial intelligence and computer vision

    CERN Document Server

    Li, Yujie

    2017-01-01

    This edited book presents essential findings in the research fields of artificial intelligence and computer vision, with a primary focus on new research ideas and results for mathematical problems involved in computer vision systems. The book provides an international forum for researchers to summarize the most recent developments and ideas in the field, with a special emphasis on the technical and observational results obtained in the past few years.

  14. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer-vision-based approach to measuring the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high-vacuum conditions of tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm at the current camera resolution, which satisfied the requirements of laser diagnostic system calibration.
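
    A hedged sketch of the displacement-measurement idea: find a bright marker's centroid before and after, then scale the pixel shift to millimetres. The threshold and mm-per-pixel factor are assumptions; the actual EAST-AIA algorithm is more involved.

        # Sketch: marker centroid shift between two frames, in millimetres.
        import cv2
        import numpy as np

        def marker_centroid(path):
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright marker
            m = cv2.moments(mask)
            return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

        MM_PER_PIXEL = 0.5  # hypothetical calibration factor
        shift = marker_centroid("after.png") - marker_centroid("before.png")
        print("displacement [mm]:", np.linalg.norm(shift) * MM_PER_PIXEL)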

  15. Low night temperature effect on photosynthate translocation of two C4 grasses.

    Science.gov (United States)

    Potvin, C; Strain, B R; Goeschl, J D

    1985-10-01

    Translocation of assimilates in plants of Echinochloa crus-galli, from Quebec and Mississippi, and of Eleusine indica from Mississippi was monitored, before and after night chilling, using radioactive tracing with the short-lived isotope ¹¹C. Plants were grown at 28°/22°C (day/night temperatures) under either 350 or 675 μl·l⁻¹ CO₂. Low night temperature reduced translocation mainly by increasing the turn-over times of the export pool. E. crus-galli plants from Mississippi were the most susceptible to chilling, translocation being completely inhibited by exposure for one night to 7°C at 350 μl·l⁻¹ CO₂. Overall, plants from Quebec were the most tolerant to chilling stress. For plants of all three populations, growth under CO₂ enrichment resulted in higher ¹¹C activity in the leaf phloem. High CO₂ concentrations also seemed to buffer the transport system against chilling injuries.

  16. Vision-Inspection System for Residue Monitoring of Ready-Mixed Concrete Trucks

    Directory of Open Access Journals (Sweden)

    Deok-Seok Seo

    2015-01-01

    Full Text Available The objective of this study is to propose a vision-inspection system that improves quality management for ready-mixed concrete (RMC). The proposed system can serve as an alternative to the current visual inspection method for the detection of residues in the agitator drum of an RMC truck. To propose the system, concept development and system-level design were executed. The design considerations of the system are derived from the hardware properties of the RMC truck and the conditions of the RMC factory, and 6 major components of the system are then selected in the system-level design stage. A prototype of the system was applied to a real RMC plant and tested to verify its utility and efficiency. It is expected that the proposed system can be employed as a practical means to increase the efficiency of quality management for RMC.

  17. Night time cooling by ventilation or night sky radiation combined with in-room radiant cooling panels including phase change materials

    DEFF Research Database (Denmark)

    Bourdakis, Eleftherios; Olesen, Bjarne W.; Grossule, Fabio

    Night sky radiative cooling technology using PhotoVoltaic/Thermal panels (PVT) and night time ventilation have been studied both by means of simulations and experiments to evaluate their potential and to validate the created simulation model used to describe it. An experimental setup has been...... depending on the sky clearness. This cooling power was enough to remove the stored heat and regenerate the ceiling panels. The validation simulation model results related to PCM were close to the corresponding results extracted from the experiment, while the results related to the production of cold water...... through the night sky radiative cooling differed significantly. The possibility of night time ventilation was studied through simulations for three different latitudes. It was concluded that for Danish climatic conditions night time ventilation would also be able to regenerate the panels while its...

  18. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    Science.gov (United States)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on the large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology to the manufacturing of miniaturized electronic components. The concepts of FMS (Flexible Manufacturing Systems), work cells, and work stations, and their control hierarchy, are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  19. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanism. The conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS, laser sensors etc., suffer several drawbacks related to either the physical limitations of the sensor or incur high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost, maintaining high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real life vision based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal based goal-driven navigation can be carried out using vision sensing. The development concept of vision based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller based sensor systems. The book descri...

  20. Photometric Assessment of Night Sky Quality over Chaco Culture National Historical Park

    Science.gov (United States)

    Hung, Li-Wei; Duriscoe, Dan M.; White, Jeremy M.; Meadows, Bob; Anderson, Sharolyn J.

    2018-06-01

    The US National Park Service (NPS) characterizes night sky conditions over Chaco Culture National Historical Park using measurements in the park and satellite data. The park is located near the geographic center of the San Juan Basin of northwestern New Mexico, adjacent to the Four Corners region. In the park, we capture a series of night sky images in V-band using our mobile camera system on nine nights from 2001 to 2016 at four sites. We perform absolute photometric calibration and determine the image placement to obtain multiple 45-million-pixel mosaic images of the entire night sky. We also model the regional night sky conditions in and around the park based on 2016 VIIRS satellite data. The average zenith brightness is 21.5 mag/arcsec², and the whole sky is only ~16% brighter than natural conditions. The faintest stars visible to the naked eye have a magnitude of approximately 7.0, reaching the sensitivity limit of human eyes. The main impacts on Chaco's night sky quality are the light domes from Albuquerque, Rio Rancho, Farmington, Bloomfield, Gallup, Santa Fe, Grants, and Crown Point. A few of these light domes exceed the natural brightness of the Milky Way. Additionally, glare sources from oil and gas development sites are visible along the north and east horizons. Overall, the night sky quality at Chaco Culture National Historical Park is very good. The park preserves to a large extent the natural illumination cycles, providing a refuge for crepuscular and nocturnal species. During clear and dark nights, visitors have an opportunity to see the Milky Way from nearly horizon to horizon, complete constellations, and faint astronomical objects and natural sources of light such as the Andromeda Galaxy, zodiacal light, and airglow.
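
    A quick hedged arithmetic check of the kind of figure quoted above ("~16% brighter than natural"): a magnitude difference converts to a linear brightness ratio via ratio = 10**(-0.4*dm). The natural-sky value used below is an illustrative assumption.

        # Sketch: sky-brightness magnitude difference -> percent change.
        m_natural = 21.6   # assumed natural zenith brightness, mag/arcsec^2
        m_observed = 21.5  # reported average zenith brightness, mag/arcsec^2

        ratio = 10 ** (-0.4 * (m_observed - m_natural))   # smaller mag = brighter sky
        print(f"sky is {100 * (ratio - 1):.0f}% brighter than natural")  # ~10% here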

  1. Creating photorealistic virtual model with polarization-based vision system

    Science.gov (United States)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models are used in many fields such as education, medical services, entertainment, art and digital archiving because of the progress in computational power, and the demand for photorealistic virtual models is increasing. In the computer vision field, a number of techniques have been developed for creating virtual models by observing real objects. In this paper, we propose a method for creating photorealistic virtual models by using a laser range sensor and a polarization-based image capture system. We capture the range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can create a photorealistic 3D model that takes surface reflection into account. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. In separating the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
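
    A hedged sketch of the separation step: with a stack of images captured at several polarizer angles, the per-pixel minimum approximates the diffuse component and the modulated residue the specular component. The min/max heuristic and file names are assumptions, not the authors' exact estimator.

        # Sketch: diffuse/specular separation from a rotating-polarizer stack.
        import numpy as np

        stack = np.load("polarizer_stack.npy")  # (n_angles, H, W), hypothetical capture

        i_min = stack.min(axis=0)   # ~ diffuse (unpolarized) component
        i_max = stack.max(axis=0)
        specular = i_max - i_min    # ~ polarized (specular) component

        np.save("diffuse.npy", i_min)
        np.save("specular.npy", specular)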

  2. The impact of sleep deprivation on surgeons' performance during night shifts.

    Science.gov (United States)

    Amirian, Ilda

    2014-09-01

    The median incidence of adverse events that may result in patient injury amounts to 9% of all in-hospital admissions. In order to reduce this high incidence, initiatives that can reduce the risk of patient harm during admission by strengthening hospital systems are continuously being developed. However, the influence of physicians' shift work on the risk of adverse events in patients remains controversial. In the studies included in this PhD thesis we wished to examine the impact of sleep deprivation and circadian rhythm disturbances on surgeons during night shifts. Further, we wished to examine the impact sleep deprivation had on surgeons' performance as a measure of how patient safety would be affected. We found that sleep deprivation subjectively had an impact on the surgeons and that they were aware of the effect fatigue had on their work performance. As a result, they applied different mechanisms to cope with fatigue. Attending surgeons felt that they had a better overview now, due to more experience and better skills, than when they were residents, despite the fatigue on night shifts. We monitored surgeons' performance during night shifts by laparoscopic simulation and cognitive tests in order to assess their performance; no deterioration was found when pre-call values were compared to on-call values. The surgeons were monitored prospectively for 4 days across a night shift in order to assess their circadian rhythm and sleep. We found that surgeons' circadian rhythm was affected by working night shifts and that their sleep pattern altered, resembling that of shift workers, on the post-call day. We assessed the quality of admission medical records as a measure of surgeons' performance during day, evening and night hours and found no deterioration in the quality of night-time medical records. However, consistently high error rates were found in several categories. These findings should be followed up in the future with respect to clarifying mechanisms and consequences for

  3. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern using current laser machining approaches. A laser galvanometric scanning (LGS) system could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of such irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visually servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.
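
    A hedged sketch of the stereo depth relation that lets the cameras guide the galvanometers: for a rectified camera pair, depth follows from disparity as Z = f*B/d. The focal length and baseline below are illustrative, not the SLGS calibration.

        # Sketch: depth from disparity for a rectified stereo pair.
        f = 1200.0   # focal length in pixels (assumed)
        B = 0.10     # baseline in metres (assumed)

        def depth_from_disparity(d_pixels):
            return f * B / d_pixels

        print(depth_from_disparity(24.0), "m")  # 24 px of disparity -> 5.0 m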

  4. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful, as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition plays no role. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss, as documented and studied in several animal species and human patients, will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  5. Grounding Our Vision: Brain Research and Strategic Vision

    Science.gov (United States)

    Walker, Mike

    2011-01-01

    While recognizing the value of "vision," it could be argued that vision alone--at least in schools--is not enough to rally the financial and emotional support required to translate an idea into reality. A compelling vision needs to reflect substantive, research-based knowledge if it is to spark the kind of strategic thinking and insight…

  6. [Shift and night work and mental health].

    Science.gov (United States)

    Sancini, Angela; Ciarrocca, Manuela; Capozzella, Assunta; Corbosiero, Paola; Fiaschetti, Maria; Caciari, Tiziana; Cetica, Carlotta; Scimitto, Lara; Ponticiello, Barnaba Giuseppina; Tasciotti, Zaira; Schifano, Maria Pia; Andreozzit, Giorgia; Tomei, Francesco; Tomei, Gianfranco

    2012-01-01

    The aim of our study was to evaluate the influence that shift work and night work could have on mental health. A review of articles published from 1990 to 2011 on shift work and night work was carried out. The results of this review confirmed that shift work and night work affect mental health with the onset of neuropsychological disorders such as mood disorders, anxiety, nervousness, depressive anxiety syndromes, chronic fatigue, chronic insomnia, irritability, sleep disturbances, reduced attention levels, cognitive impairments and alteration of the circadian rhythm. Night work and shift work cause severe desynchronization of chronobiological rhythms and a disruption of social life, with negative effects on work performance, health and social relationships. In the light of these results, and recognizing shift work and night work as risk factors for workers' health, it is necessary to implement preventive and periodic health checks by the occupational physician to ensure the health and safety of workers, taking account of the different environmental and individual factors.

  7. Evolution of Vision

    Science.gov (United States)

    Ostrovsky, Mikhail

    The evolution of photoreception, giving rise to the eye, offers a kaleidoscopic view of selection acting at both the organ and molecular levels. The molecular level is mainly considered in the lecture. The greatest progress to date has been made in relation to the opsin visual pigments. Opsins appeared before eyes did. The two- and three-dimensional organization of rhodopsin in the rod outer segment disk membrane, the molecular mechanisms of visual pigment spectral tuning and photoisomerization, and opsin as a G-protein-coupled receptor are considered. The molecular mechanisms of visual pigment spectral tuning, namely switching of the chromophore (physiological time scale) and amino acid changes in the chromophore site of opsin (evolutionary time scale), are considered in the lecture. Photoisomerization of the rhodopsin chromophore, 11-cis retinal, is the only photochemical reaction in vision. The reaction is extremely fast (less than 200 fs) and highly efficient (quantum yield 0.65). The photolysis of rhodopsin and the kinetics of the appearance of the early products, photo- and bathorhodopsin, are considered. It is known that light is not only a carrier of information but also a risk factor for damage to the eye. This photobiological paradox of vision is mainly due to the nature of the rhodopsin chromophore, with photooxidation at the base of the paradox. All the factors needed to initiate free-radical photooxidation are present in the photoreceptor cells: photosensitizers, oxygen and substrates of oxidation (lipids and proteins, including opsin). That is why a photoprotective system for the eye structures appeared in the course of evolution. Three lines of protection against light damage to the retina and retinal pigment epithelium are known: permanent renewal of rod and cone outer segments, a powerful antioxidant system, and the optical media acting as cut-off filters, with the lens as a key component. The molecular mechanisms of light damage to the eye and the photoprotective system of the eye are considered in the lecture.

  8. Laser illumination and EO systems for covert surveillance from NIR to SWIR and beyond

    Science.gov (United States)

    Dvinelis, Edgaras; Žukauskas, Tomas; Kaušylas, Mindaugas; Vizbaras, Augustinas; Vizbaras, Kristijonas; Vizbaras, Dominykas

    2016-10-01

    One of the most important factors of success on the battlefield is the ability to remain undetected by opposing forces while retaining the ability to detect all possible threats. Illumination and pointing systems working in the NIR and SWIR bands are presented. Wavelengths up to 1100 nm can be registered by newest-generation image intensifier tubes and by CCD and EMCCD sensors; image intensifier tubes of generation III or older are limited to wavelengths up to 900 nm [1]. Longer wavelengths of 1550 nm and 1625 nm are designed to be used with SWIR electro-optical systems and cannot be detected by any standard night vision system. Long-range SWIR illuminators and pointers have beam divergences down to 1 mrad and optical powers up to 1.5 W. Due to lower atmospheric scattering, SWIR illuminators and pointers can be used at extremely long distances, up to tens of km, and even in heavy weather conditions. Longer wavelengths of 2100 nm and 2450 nm are also presented; this spectral band is of great interest for direct infrared countermeasure (DIRCM) applications. State-of-the-art SWIR and LWIR electro-optical systems are presented. Sensitive InGaAs sensors coupled with "fast" (low F/#) optical lenses can provide complete night vision, detection of all NIR and SWIR laser lines, and penetration through smoke, dust and fog. Finally, beyond-state-of-the-art uncooled microbolometer LWIR systems are presented, featuring ultra-high sensor sensitivities of 20 mK.

  9. NIGHT SKY BRIGHTNESS ABOVE ZAGREB 2012.-2017.

    Directory of Open Access Journals (Sweden)

    Željko Andreić

    2018-01-01

    Full Text Available The night sky brightness at the RGN site (near the centre of Zagreb, Croatia) was monitored from January 2012 to December 2017. The gathered data show that the average night sky brightness in this period did not change significantly, apart from differences caused by yearly variations in meteorological parameters. The nightly minima, maxima and mean values of the sky brightness do change considerably due to changes in meteorological conditions, often by between 2 and 3 magnitudes. The seasonal probability curves and histograms are constructed and are used to obtain additional information on the light pollution at the RGN site. They reveal that the night sky brightness clusters around two peaks, at about 15.0 mag/arcsec² and at about 18.2 mag/arcsec². A tendency to slightly lower brightness values in spring and summer can also be seen in the data. The two peaks correspond to cloudy and clear nights respectively, the difference in brightness between them being about 3 magnitudes. A crude clear/cloudy criterion can be defined too: the minimum between the two peaks is around 16.7 mag/arcsec². Brightness values smaller than this (i.e., numerically larger magnitudes) are attributed to clear nights and vice versa. Comparison with Vienna and Hong Kong indicates that the light pollution of Zagreb is a few times larger.
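
    A hedged sketch of the quoted clear/cloudy criterion: classify nightly readings against the 16.7 mag/arcsec² valley between the two histogram peaks, treating numerically larger (fainter) values as clear nights. The data file is a placeholder.

        # Sketch: crude clear/cloudy classification of sky-brightness readings.
        import numpy as np

        sqm = np.load("zagreb_sqm.npy")   # nightly brightness, mag/arcsec^2 (placeholder)
        CUTOFF = 16.7                     # valley between cloudy (15.0) and clear (18.2) peaks

        clear = sqm > CUTOFF              # fainter (numerically larger) = clear night
        print(f"clear-night fraction: {clear.mean():.0%}")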

  10. Night shift decreases cognitive performance of ICU physicians.

    Science.gov (United States)

    Maltese, François; Adda, Mélanie; Bablon, Amandine; Hraeich, Sami; Guervilly, Christophe; Lehingue, Samuel; Wiramus, Sandrine; Leone, Marc; Martin, Claude; Vialet, Renaud; Thirion, Xavier; Roch, Antoine; Forel, Jean-Marie; Papazian, Laurent

    2016-03-01

    The relationship between tiredness and the risk of medical errors is now commonly accepted. The main objective of this study was to assess the impact of an intensive care unit (ICU) night shift on the cognitive performance of a group of intensivists. The influence of professional experience and the amount of sleep on cognitive performance was also investigated. A total of 51 intensivists from three ICUs (24 seniors and 27 residents) were included. The study participants were evaluated after a night of rest and after a night shift, in randomized order. Four cognitive skills were tested according to the Wechsler Adult Intelligence Scale and the Wisconsin Card Sorting Test. All cognitive abilities worsened after a night shift, including working memory capacity (11.3 ± 0.3 vs. 9.4 ± 0.3). The cognitive abilities of intensivists were significantly altered following a night shift in the ICU, regardless of either the amount of professional experience or the duration of sleep during the shift. The consequences for patients' safety and physicians' health should be further evaluated.

  11. Night-Time Light Dynamics during the Iraqi Civil War

    Directory of Open Access Journals (Sweden)

    Xi Li

    2018-06-01

    Full Text Available In this study, we analyzed the night-time light dynamics in Iraq over the period 2012–2017 by using Visible Infrared Imaging Radiometer Suite (VIIRS) monthly composites. The data quality of the VIIRS images was improved by repairing the missing data, and the Night-time Light Ratio Indices (NLRIs), derived from an urban extent map and night-time light images, were calculated for different provinces and cities. We found that when the Islamic State of Iraq and Syria (ISIS) attacked or occupied a region, the region lost its light rapidly, with the provinces of Al-Anbar, At-Ta’min, Ninawa, and Sala Ad-din losing 63%, 73%, 88%, and 56% of their night-time light, respectively, between December 2013 and December 2014. Moreover, the light returned after the Iraqi Security Forces (ISF) recaptured the region. In addition, we also found that the night-time light in the Kurdish Autonomous Region showed a steady decline after 2014, with the Arbil, Dihok, and As-Sulaymaniyah provinces losing 47%, 18%, and 31% of their night-time light between December 2013 and December 2016 as a result of the economic crisis in the region. The night-time light in Southern Iraq, the region controlled by the Iraqi central government, has grown continuously; for example, the night-time light in Al Basrah increased by 75% between December 2013 and December 2017. Regions formerly controlled by ISIS experienced a return of night-time light during 2017 as the ISF retook almost all this territory in 2017. This indicates that as reconstruction began, electricity was re-supplied in these regions. Our analysis shows the night-time light in Iraq is directly linked to the socioeconomic dynamics of Iraq, and demonstrates that the VIIRS monthly night-time light images are an effective data source for tracking humanitarian disasters in that country.
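
    A hedged sketch of the percentage-change computation reported above, with a simple sum of radiance inside an urban mask standing in for the paper's NLRI; the raster files are placeholders.

        # Sketch: night-time light change between two VIIRS monthly composites.
        import numpy as np

        dec2013 = np.load("viirs_2013_12.npy")    # hypothetical radiance rasters
        dec2014 = np.load("viirs_2014_12.npy")
        urban = np.load("urban_mask.npy").astype(bool)

        before = dec2013[urban].sum()             # total urban radiance, before
        after = dec2014[urban].sum()              # total urban radiance, after
        print(f"night-time light change: {100 * (after - before) / before:+.0f}%")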

  12. Age-Related Psychophysical Changes and Low Vision

    Science.gov (United States)

    Dagnelie, Gislin

    2013-01-01

    When considering the burden of visual impairment on aging individuals and society at large, it is important to bear in mind that vision changes are a natural aspect of aging. In this article, we consider vision changes that are part of normal aging, the prevalence of abnormal vision changes caused by disorders of the visual system, and the anticipated incidence and impact of visual impairment as the US population ages. We then discuss the services available to reduce the impact of vision loss, and the extent to which those services can and should be improved, not only to be better prepared for the anticipated increase in low vision over the coming decades, but also to increase the awareness of interactions between visual impairment and comorbidities that are common among the elderly. Finally, we consider how to promote improved quality, availability, and acceptance of low vision care to lessen the impact of visual impairment on individuals, and its burden on society. PMID:24335074

  13. Night shift work characteristics and occupational co-exposures in industrial plants in Łódź, Poland

    Directory of Open Access Journals (Sweden)

    Beata Pepłońska

    2013-08-01

    Full Text Available Objectives: Night shift work involving circadian rhythm disruption has been classified by IARC as probably carcinogenic to humans (Group 2A). Little is known about the co-exposures accompanying night shift work in occupational settings. The aim of our study was to characterize the night shift work systems and industrial exposures occurring in manufacturing plants in Łódź, Poland, where night shift work systems operate, with a particular focus on potential carcinogens. Material and Methods: Data on the night shift work systems and hazardous agents were collected through a survey performed in 44 enterprises. The identified hazardous agents were checked against the IARC carcinogen list and the harmonized EU classification of chemical substances. We also examined databases of the Central Register of data on exposure to substances, preparations, agents and technological processes showing carcinogenic or mutagenic properties in Poland. Results: The most common system of work among the studied enterprises employed three 8-hour shifts within a 5-day cycle. We identified as many as 153 hazards occurring in the environment of the plants, with noise, carbon monoxide and formaldehyde recorded as the most common ones. Out of these hazards, 11 agents have been classified by IARC in Group 1 (carcinogenic to humans), whereas 10 agents have been classified as carcinogens by the European classification of carcinogens. Analysis of the data from the Central Register revealed that 6 plants reported the presence of carcinogens in the work environment. Conclusions: In our study we observed that in none of the workplaces was night shift work a single exposure. Further epidemiological studies investigating the health effects of night shift work should identify occupational co-exposures and examine them as potential confounders.

  14. The city at night (the case of Maribor, Slovenia)

    Directory of Open Access Journals (Sweden)

    Vladimir Drozg

    2016-12-01

    Full Text Available This paper focuses on the city at night. The distinctive aspect of the discussed topic is the time dimension of spaces and areas – places that “live” at night. The night has economic, cultural, social and formal elements; and it is these elements that underpin how we see and come to know the city at night. A range of topics have been explored: places of retailing and consumption, workplaces, places of entertainment, places that embody the night image of the city and places of socially unacceptable, delinquent behaviour. In the empirical part, we examined the city of Maribor, Slovenia.

  15. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for camera calibration of intrinsic parameters is addressed, and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated, and 3D Euclidean reconstruction using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can be easily incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision and related subjects.

  16. Optical methods for the optimization of system SWaP-C using aspheric components and advanced optical polymers

    Science.gov (United States)

    Zelazny, Amy; Benson, Robert; Deegan, John; Walsh, Ken; Schmidt, W. David; Howe, Russell

    2013-06-01

    We describe the benefits to camera system SWaP-C associated with the use of aspheric molded glasses and optical polymers in the design and manufacture of optical components and elements. Both camera objectives and display eyepieces, typical for night vision man-portable EO/IR systems, are explored. We discuss optical trade-offs, system performance, and cost reductions associated with this approach in both visible and non-visible wavebands, specifically NIR and LWIR. Example optical models are presented, studied, and traded using this approach.

  17. Night lights and regional income inequality in Africa

    DEFF Research Database (Denmark)

    Mveyange, Anthony Francis

    Estimating regional income inequality in Africa has been challenging due to the lack of reliable and consistent sub-national income data. I employ night lights data to circumvent this limitation. I find significant and positive associations between regional inequality visible through night lights...... and income in Africa. Thus, in the absence of income data, we can construct regional inequality proxies using night lights data. Further investigation on the night lights-based regional inequality trends reveals two main findings: first, increasing regional inequality trends between 1992 and 2003; and second......, declining regional inequality trends between 2004 and 2012....

  18. Image segmentation for enhancing symbol recognition in prosthetic vision.

    Science.gov (United States)

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.
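
    A hedged sketch of the described idea, with SLIC superpixels standing in for the paper's segmenter: segment the image, take the region under the user's fixation point, and downsample it to a phosphene-resolution frame. The image path, fixation coordinates and 32x32 display size are assumptions.

        # Sketch: fixation-selected region rendered at phosphene resolution.
        import numpy as np
        from skimage import io, transform
        from skimage.segmentation import slic

        image = io.imread("scene.png")               # placeholder input image
        labels = slic(image, n_segments=100)         # oversegment into regions
        fy, fx = 120, 200                            # user-controlled fixation point
        mask = labels == labels[fy, fx]              # region under fixation

        phosphenes = transform.resize(mask.astype(float), (32, 32))  # coarse display
        np.save("phosphene_frame.npy", phosphenes)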

  19. Dry eye signs and symptoms in night-time workers

    OpenAIRE

    Ali Makateb; Hamed Torabifard

    2017-01-01

    Purpose: To determine the effect of night-time working on dry eye signs and symptoms. Methods: A total of 50 healthy subjects completed a dry eye questionnaire and underwent clinical examinations, including the basic Schirmer's test and the tear breakup time (TBUT) test, on two consecutive days, before and after a 12-hr night shift. Results: All dry eye symptoms were aggravated significantly after the night shift.

  20. Solar Neutrino Day-Night Effect

    International Nuclear Information System (INIS)

    Blennow, Mattias; Ohlsson, Tommy; Snellman, Hakan

    2005-01-01

    We summarize the results of Ref. [M. Blennow, T. Ohlsson and H. Snellman, Phys. Rev. D 69 (2004) 073006, hep-ph/0311098] in which we determine the effects of three flavor mixing on the day-night asymmetry in the flux of solar neutrinos. Analytic methods are used to determine the difference in the day and night solar electron neutrino survival probabilities and numerical methods are used to determine the effect of three flavor mixing at detectors

  1. 78 FR 22848 - 36(b)(1) Arms Sales Notification

    Science.gov (United States)

    2013-04-17

    ... Spare Guidance Sections, 18 AN/AVS-9(V) Night Vision Goggles, H-764G with GEM V Selective Availability... Simulator (PASIS), 10 AMRAAM Spare Guidance Sections, 18 AN/AVS-9(V) Night Vision Goggles, H-764G with GEM V... other documentation up to Secret. 2. The AN/AVS-9 Night Vision Goggles (NVG) are 3rd generation aviation...

  2. Exercise attenuates the metabolic effects of dim light at night.

    Science.gov (United States)

    Fonken, Laura K; Meléndez-Fernández, O Hecmarie; Weil, Zachary M; Nelson, Randy J

    2014-01-30

    Most organisms display circadian rhythms that coordinate complex physiological and behavioral processes to optimize energy acquisition, storage, and expenditure. Disruptions to the circadian system with environmental manipulations such as nighttime light exposure alter metabolic energy homeostasis. Exercise is known to strengthen circadian rhythms and to prevent weight gain. Therefore, we hypothesized that providing mice with a running wheel for voluntary exercise would buffer against the effects of light at night (LAN) on weight gain. Mice were maintained under either dark (LD) or dim (dLAN) nights and provided either a running wheel or a locked wheel. Mice exposed to dim, rather than dark, nights increased weight gain. Access to a functional running wheel prevented body mass gain in mice exposed to dLAN. Voluntary exercise appeared to limit weight gain independently of rescuing changes to the circadian system caused by dLAN; increases in daytime food intake induced by dLAN were not diminished by increased voluntary exercise. Furthermore, although all of the LD mice displayed a 24-h rhythm in wheel running, nearly half (4 out of 9) of the dLAN mice did not display a dominant 24-h rhythm in wheel running. These results indicate that voluntary exercise can prevent weight gain induced by dLAN without rescuing circadian rhythm disruptions. © 2013.

  3. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether or not reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate independently the error for each step of the stereo-vision-based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze in particular the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
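
    A hedged numeric sketch of one propagation step discussed above: pushing an edge-localization (disparity) error through stereo triangulation, using Z = f*B/d so that dZ ~ (Z^2 / (f*B)) * dd. All values are illustrative.

        # Sketch: first-order propagation of a disparity error to depth error.
        f = 1500.0    # focal length, pixels (assumed)
        B = 0.20      # stereo baseline, metres (assumed)
        Z = 1.0       # nominal workpiece distance, metres
        dd = 0.5      # edge localization / disparity error, pixels

        dZ = (Z ** 2) / (f * B) * dd
        print(f"depth uncertainty: {1000 * dZ:.2f} mm")  # ~1.67 mm here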

  4. Science by night – it's magic!

    CERN Document Server

    CERN Bulletin

    2010-01-01

    The control rooms of the LHC and its experiments threw open their doors to 150 youngsters on European Researchers Night and the place was buzzing with excitement all evening!    It's just possible that a few scientists' vocations were born last Friday night, as the sixth European Researchers Night took place across Europe. CERN was taking part for the first time and invited young people aged from 12 to 19 into the control rooms of the LHC machine and five experiments. From 5.00 in the afternoon until 1.00 in the morning, 150 youngsters and physics teachers got the opportunity to sit with scientists at the controls of the accelerator and experiments. This meeting of minds went down very well for all concerned, the scientists being only too happy to wax lyrical about their passion. The youngsters were thrilled with their visit and amazed at being allowed so close to the controls of these mighty machines. The night-time setting added an extra touch of magic to the whole event. Some just could...

  5. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  6. Making a vision document tangible using "vision-tactics-metrics" tables.

    Science.gov (United States)

    Drury, Ivo; Slomski, Carol

    2006-01-01

    We describe a method of making a vision document tangible by attaching specific tactics and metrics to the key elements of the vision. We report on the development and early use of a "vision-tactics-metrics" table in a department of surgery. Use of the table centered the vision in the daily life of the department and its faculty, and facilitated cultural change.

  7. Illumination Effect of Laser Light in Foggy Objects Using an Active Imaging System

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Ahn, Yong-Jin; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Active imaging techniques usually provide improved image information when compared to passive imaging techniques. Active vision is a direct visualization technique using an artificial illuminant. Range-gated imaging (RGI) is one of the active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure. Range-gated imaging is an emerging technology in the field of surveillance for security applications, especially for visualization in dark night or foggy environments. Although RGI viewing was discovered in the 1960s, this technology has become more practical thanks to the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short-pulse laser light sources. In particular, this system can be adopted in robot-vision systems by virtue of its compact configuration. During the past decades, this technology has been applied to target recognition and to harsh environments such as fog and underwater vision, and range imaging based on range gating has also been demonstrated. Laser light with a short pulse width is usually used for a range-gated imaging system. In this paper, the illumination effect of laser light on foggy objects is studied using a range-gated imaging system. The imaging system consists of an ultra-short-pulse (0.35 ns) laser light source and a gated imaging sensor. The experiment is carried out to monitor objects in a box filled with fog. In this paper, the effects of fog particles on the range-gated imaging technique are studied. Edge blurring and range distortion are generated by fog particles.
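
    A hedged toy model of the range-gating idea: from a stack of frames gated at increasing delays, sum only the gates bracketing the target range so that near-range fog backscatter is rejected. The array layout, gate indices and haze-subtraction step are assumptions for illustration.

        # Sketch: build a range-gated image by summing time-sliced frames.
        import numpy as np

        gates = np.load("gated_frames.npy")    # (n_gates, H, W), one frame per delay
        target_gates = slice(18, 24)           # hypothetical delays bracketing the target

        image = gates[target_gates].sum(axis=0)          # sum time-sliced images
        haze = np.median(gates[:5], axis=0)              # near-range fog backscatter estimate
        np.save("rgi_image.npy", image - haze)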

  8. Illumination Effect of Laser Light in Foggy Objects Using an Active Imaging System

    International Nuclear Information System (INIS)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Ahn, Yong-Jin; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    Active imaging techniques usually provide improved image information when compared to passive imaging techniques. Active vision is a direct visualization technique using an artificial illuminant. Range-gated imaging (RGI) is one of the active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is captured by a highly sensitive image sensor with an ultra-short exposure. Range-gated imaging is an emerging technology in the field of surveillance for security applications, especially for visualization in dark night or foggy environments. Although RGI viewing was discovered in the 1960s, this technology has become more practical thanks to the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short-pulse laser light sources. In particular, this system can be adopted in robot-vision systems by virtue of its compact configuration. During the past decades, this technology has been applied to target recognition and to harsh environments such as fog and underwater vision, and range imaging based on range gating has also been demonstrated. Laser light with a short pulse width is usually used for a range-gated imaging system. In this paper, the illumination effect of laser light on foggy objects is studied using a range-gated imaging system. The imaging system consists of an ultra-short-pulse (0.35 ns) laser light source and a gated imaging sensor. The experiment is carried out to monitor objects in a box filled with fog. In this paper, the effects of fog particles on the range-gated imaging technique are studied. Edge blurring and range distortion are generated by fog particles.

  9. The reported incidence of man-machine interface issues in Army aviators using the Aviator's Night Vision System (ANVIS) in a combat theatre

    Science.gov (United States)

    Hiatt, Keith L.; Rash, Clarence E.

    2011-06-01

    Background: Army Aviators rely on the ANVIS for night operations. The human factors literature notes that the ANVIS man-machine interface results in reports of visual and spinal complaints. This is the first study that has looked at these issues in the much harsher combat environment. Last year, the authors reported on the statistically significant findings from aircrew flying in Operation Enduring Freedom (OEF). Results: 82 aircrew (representing an aggregate of >89,000 flight hours, of which >22,000 were with ANVIS) participated. Analysis demonstrated high rates of complaints of back and neck pain at almost all spinal levels. Additionally, the use of body armor and other Aviation Life Support Equipment (ALSE) caused significant ergonomic complaints when used with ANVIS. Conclusions: ANVIS use in a combat environment resulted in more, and different types of, reports of spinal symptoms and other man-machine interface issues than previously reported. Data from this study may be more operationally relevant than that of the peacetime literature, as it is derived from actual combat and not from training flights, and it may have important implications for making combat predictions based on performance in training scenarios. Notably, aircrew remarked that they could not execute the mission without ANVIS and ALSE and accepted the degraded ergonomic environment.

  10. Night Shift Work and Risk of Breast Cancer.

    Science.gov (United States)

    Hansen, Johnni

    2017-09-01

    Night work is increasingly common and a necessity in certain sectors of the modern 24-h society. The embedded exposure to light at night, which suppresses the nocturnal hormone melatonin and its oncostatic properties, and circadian disruption, i.e., misalignment between internal and external night and between cells and organs, are suggested as the main mechanisms involved in carcinogenesis. In 2007, the International Agency for Research on Cancer (IARC) classified shift work that involves circadian disruption as probably carcinogenic to humans, based on limited evidence from eight epidemiologic studies on breast cancer, in addition to sufficient evidence from animal experiments. The aim of this review is a critical update of the IARC evaluation, including subsequent and the most recent epidemiologic evidence on breast cancer risk after night work. After 2007, in total nine new case-control studies, one case-cohort study, and eight cohort studies have been published, which triples the number of studies. Further, two previous cohorts have been updated with extended follow-up. The assessment of night shift work is different in all of the 26 existing studies. There is some evidence that a high number of consecutive night shifts has an impact on the extent of circadian disruption, and thereby increased breast cancer risk, but this information is missing in almost all cohort studies. This, in combination with short-term follow-up of aging cohorts, may explain why some cohort studies may have null findings. The more recent case-control studies have contributed interesting results concerning breast cancer subtypes in relation to both menopausal status and different hormonal subtypes. The large differences in definitions of both exposure and outcome may contribute to the observed heterogeneity of results from studies of night work and breast cancer, which overall point in the direction of an increased breast cancer risk, in particular after over 20 years of night shifts. Overall, there is a

  11. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision ('scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  12. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach. Tasks were completed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  13. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capability and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise-reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old devices to recent ones. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
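
    As a rough desktop analogue of the paper's face-detection benchmark (the study itself ran on Android handsets), one can time a single detection pass with OpenCV; the Haar cascade file ships with the opencv-python package, and "scene.jpg" is a placeholder input:

        # Time one face-detection pass with a stock Haar cascade.
        import time
        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        frame = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image
        if frame is None:
            raise SystemExit("put a test image at scene.jpg first")

        t0 = time.perf_counter()
        faces = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
        print(f"{len(faces)} face(s) in {(time.perf_counter() - t0) * 1000:.1f} ms")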

  14. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping for mobile robots with active vision. The novel concept is implemented and evaluated in a humanoid robot navigation scenario, in which the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated, and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: the flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e., path-following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.
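
    The sensor-assignment idea can be caricatured in a few lines. The toy sketch below (not the authors' implementation; all numbers invented) picks, for each landmark bearing, the most accurate camera whose field of view still contains the landmark:

        # Choose the most accurate camera that can still see a landmark.
        from dataclasses import dataclass

        @dataclass
        class Camera:
            name: str
            fov_deg: float    # full field of view
            noise_deg: float  # angular measurement noise (1 sigma)

        cameras = [Camera("wide", 90.0, 0.50), Camera("tele", 15.0, 0.05)]

        def best_camera(bearing_deg):
            visible = [c for c in cameras if abs(bearing_deg) <= c.fov_deg / 2]
            return min(visible, key=lambda c: c.noise_deg) if visible else None

        for bearing in (3.0, 30.0):
            cam = best_camera(bearing)
            print(f"landmark at {bearing:+.0f} deg -> {cam.name if cam else 'no camera'}")

    A landmark near the optical axis is handed to the accurate telephoto camera; one far off-axis is only visible to the wide-angle camera, mirroring the flexibility/accuracy trade-off described above.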

  15. Night-time warming and the greenhouse effect

    International Nuclear Information System (INIS)

    Kukla, G.; Karl, T.R.

    1993-01-01

    Studies of temperature data collected mainly from rural stations in North America, China, the Commonwealth of Independent States, Australia, Sudan, Japan, Denmark, Northern Finland, several Pacific Islands, Pakistan, South Africa and Europe suggest that the reported warming of the Northern Hemisphere since WWII is principally a result of an increase in night-time temperatures. The average monthly maximum and minimum temperatures, as well as the mean diurnal temperature range (DTR), were calculated for various regions from data supplied by 1000 stations from 1951 to 1990. Average and minimum temperatures generally rose during the analysed interval and the rise in night-time temperatures was more pronounced than the increase in daily maximum temperatures. As a result, the mean DTR decreased almost everywhere. The most probable causes of the rise in night-time temperatures are: an increase in cloudiness owing to natural changes in the circulation patterns of oceans and the atmosphere; increased cloud cover density caused by industrial pollution; urban heat islands, generated by cities, which are strongest during the night; irrigation which keeps the surface warmer at night and cooler by day; and anthropogenic greenhouse gases. 18 refs., 3 figs
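
    The quantities behind the analysis are straightforward to reproduce. A sketch with invented daily station values shows how the monthly mean maximum, mean minimum, and mean diurnal temperature range (DTR) relate:

        # Monthly means of daily Tmax/Tmin and the diurnal temperature range.
        daily_tmax = [14.2, 15.1, 13.8, 16.0]  # degrees C, invented values
        daily_tmin = [4.1, 5.3, 3.9, 6.2]

        n = len(daily_tmax)
        mean_tmax = sum(daily_tmax) / n
        mean_tmin = sum(daily_tmin) / n
        mean_dtr = sum(hi - lo for hi, lo in zip(daily_tmax, daily_tmin)) / n

        print(f"mean Tmax {mean_tmax:.1f} C, mean Tmin {mean_tmin:.1f} C, "
              f"mean DTR {mean_dtr:.1f} C")
        # If Tmin rises faster than Tmax over the decades, the mean DTR
        # shrinks: the asymmetry the study reports.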

  16. A digital retina-like low-level vision processor.

    Science.gov (United States)

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and simulation of a low-level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, each of which is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k×m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. The paper provides the hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples.
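
    The layered organization can be sketched schematically. The Python sketch below uses a square grid and software filters, so it only mirrors the data flow between layers, not the hexagonal topology or the dedicated hardware of the actual design:

        # Chain of layers, each consuming the previous layer's output.
        import numpy as np
        from scipy.ndimage import uniform_filter, sobel

        def smoothing_layer(img):          # smoothing / light adaptation
            return uniform_filter(img, size=3)

        def edge_layer(img):               # gradient-magnitude edge detection
            return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

        def segmentation_layer(edges, frac=0.5):  # crude binary segmentation
            return edges > frac * edges.max()

        img = np.random.rand(64, 64)       # placeholder input image
        mask = segmentation_layer(edge_layer(smoothing_layer(img)))
        print("pixels marked as region boundaries:", int(mask.sum()))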

  17. The Citizen-Scientist as Data Collector: GLOBE at Night, Part 2

    Science.gov (United States)

    Walker, C. E.; Pompea, S. M.; Ward, D.; Henderson, S.; Meymaris, K.; Gallagher, S.; Salisbury, D.

    2006-12-01

    event, the GLOBE at Night team is eager to offer it again from March 8-21, 2007. For more information, see www.globe.gov/GaN or contact globeatnight@globe.gov or outreach@noao.edu. GLOBE at Night is a collaboration between The GLOBE Program, the National Optical Astronomy Observatory (NOAO), Centro de Apoyo a la Didáctica de la Astronomía (CADIAS), Windows to the Universe, and Environmental Systems Research Institute, Inc. (ESRI). NOAO is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under cooperative agreement with the National Science Foundation.

  18. Light Pollution Awareness through Globe at Night & IYL2015

    Science.gov (United States)

    Walker, Constance E.

    2015-01-01

    The International Astronomical Union (IAU) will be coordinating extensive activities to raise awareness of light pollution by running the Cosmic Light theme of the International Year of Light (IYL2015) and by partnering, in particular, with the popular Globe at Night program. Globe at Night (www.globeatnight.org) is an international campaign to raise public awareness of the impact of light pollution by having people measure night-sky brightness and submit observations in real time with a smartphone, or later with a computer. In 2015, Globe at Night will run for 10 nights each month, from an hour after sunset until the Moon rises. Students can use the data to monitor levels of light pollution around the world, as well as understand light pollution's effects on energy consumption, plants, wildlife, human health, and our ability to enjoy a starry night sky. Since its inception in 2006, more than 115,000 measurements from 115 countries have been reported. The last 9 years of data can be explored with Globe at Night's interactive world map, or with the 'map app' to view a particular area. A spreadsheet of the data for any year is downloadable. One can compare Globe at Night data with a variety of other databases to see, for example, how light pollution affects the foraging habits of bats. To encourage public participation in Globe at Night during IYL2015, each month will target an area of the world that habitually contributes during that time. Special concerns about how light pollution affects that area, and possible solutions, will be featured on the Globe at Night website (www.globeatnight.org), through its Facebook page, in its newsletter, and in the 365DaysofAstronomy.org podcasts. Twice during IYL there will be a global Flash Mob event, one on Super Pi Day (March 14, 2015) and a second in mid-September, where the public will be invited to take night-sky brightness measurements en masse. In April, the International Dark-Sky Week hosted by the International Dark-Sky Association will be
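
    Once a yearly spreadsheet has been downloaded, exploring it takes only a few lines. A hedged sketch follows; the filename and the column names ("Latitude", "Longitude", "LimitingMag", "Country") are assumptions to be checked against the actual file header:

        # Summarize downloaded Globe at Night observations by country.
        import pandas as pd

        df = pd.read_csv("GaN2015.csv")  # placeholder filename
        df = df.dropna(subset=["Latitude", "Longitude", "LimitingMag"])

        # Darker skies let observers see fainter stars, i.e. a higher
        # naked-eye limiting magnitude.
        by_country = df.groupby("Country")["LimitingMag"].mean().sort_values()
        print(by_country.tail(10))  # the ten darkest-sky countries in the sample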

  19. Understanding A.M. Iqbal’s Vision on Perfect Man

    Directory of Open Access Journals (Sweden)

    Imam Bahroni

    2013-12-01

    Full Text Available This article tries to elucidate A.M. Iqbal's vision of the concept of the perfect man. Its significance lies in the point that man can transform both his being and his surroundings according to his own desires and aspirations. He actually makes improvements upon what is created by God. God created night, he invented the lamp; God created clay, and from it he made the cup; God created deserts, mountains, forests, orchards, gardens, and groves. He makes glass out of stone and turns poison into an antidote. God created the world, but he made it more beautiful. Iqbal's reasoning amply justifies belief in the ascendancy of man over the universe and his predicted perfection. The perfect man is the ultimate end of the revolutionary process, and he is developed out of the present man, just as the full moon is developed from the crescent.

  20. A night with good vibrations

    CERN Multimedia

    2002-01-01

    For its third edition, the Musée d'histoire des sciences invites you to a Science Night under the banner of waves and undulations. Scientists, artists and storytellers from more than forty institutes and local or regional associations will show, in a single weekend, that waves and undulations form an integral part of our daily environment. Telephones, televisions, radios, irons, lighting, music, sun rays, stars, rainbows, earthquakes and other natural phenomena - all produce, emit or receive waves or undulations. Visitors attending the Night will be able to engage with the nature of waves through interactive exhibitions on sound and light and through hands-on demonstrations arranged around the Bartholoni villa and in the Park of the Perle du lac. An amusing and entertaining way to familiarize yourself with the concepts of wavelength, frequency and interference lengths... In addition to the stands, the Night will offer many other activities: reconstructions of critical experiments, scientific consu...