WorldWideScience

Sample records for laser vision system

  1. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastian, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and exhibit variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows scanning and processing to be concentrated on the active areas of a scene, as is done by the human eye-brain system.
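
    The range-from-beat-frequency relation underlying FMCW coherent laser radar can be illustrated with a short sketch. The snippet below is not the CLVS implementation; it simply assumes a linear optical frequency sweep of bandwidth B over period T, for which the beat frequency f_b between the outgoing and returned light gives R = c·T·f_b/(2B). All names are illustrative.

    ```python
    import numpy as np

    C = 3.0e8  # speed of light, m/s

    def range_from_beat(samples, fs, sweep_bandwidth, sweep_period):
        """Estimate target range (m) from one detector record of an FMCW sweep."""
        windowed = samples * np.hanning(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
        f_beat = freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
        return C * sweep_period * f_beat / (2.0 * sweep_bandwidth)

    # Example: a 100 GHz sweep over 1 ms with a 500 kHz beat gives roughly 0.75 m.
    ```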

  2. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastian, R.L. [Coleman Research Corp., Springfield, VA (United States)]

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and exhibit variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows scanning and processing to be concentrated on the active areas of a scene, as is done by the human eye-brain system.

  3. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular shape objects with different 3-dimensional (3D) appearances are difficult to shape into customized, uniform patterns with current laser machining approaches. A laser galvanometric scanning system (LGS) is a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of these irregular shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which combines the advantages of a stereo vision solution and a conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, the two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of the SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  4. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning is accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop performance, reaching mean errors of around 30 μm and maximum observed errors in the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with a significant potential positive impact on the safety and quality of laser microsurgeries.
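
    As an illustration of the dual control idea described above, the sketch below shows one generic way a vision-based correction phase can refine an open-loop trajectory: the residual between planned and observed spot positions in the image is fed back as an offset for the next pass. The function name, the gain value and the pixel-space formulation are illustrative assumptions, not the authors' controller.

    ```python
    import numpy as np

    def correct_trajectory(planned_px, observed_px, gain=0.8):
        """planned_px, observed_px: (N, 2) arrays of spot positions in pixels.

        Returns corrected target points for the next open-loop scan pass.
        """
        error = planned_px - observed_px      # aiming error measured by the camera
        return planned_px + gain * error      # shift the commands to cancel it
    ```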

  5. Fiber optic coherent laser radar 3d vision system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-01-01

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  6. Calibration method for a vision guiding-based laser-tracking measurement system

    International Nuclear Information System (INIS)

    Shao, Mingwei; Wei, Zhenzhong; Hu, Mengjie; Zhang, Guangjun

    2015-01-01

    Laser-tracking measurement systems (laser trackers) based on a vision-guiding device are widely used in industrial fields, and their calibration is important. Conventional methods typically have disadvantages such as difficult machining of the target and over-dependence on the retroreflector, so a novel calibration method is presented in this paper. The retroreflector, which is necessary in the normal calibration method, is unnecessary in our approach. As the laser beam is linear, points on the beam can be obtained with the help of a normal planar target. In this way, we can determine the equation of a laser beam in the camera coordinate system, while its corresponding equation in the laser-tracker coordinate system can be obtained from the encoder of the laser tracker. Once several such pairs of beam equations are determined, the rotation matrix can be solved from the direction vectors of the laser beams in the two coordinate systems. As the intersection of the laser beams is the origin of the laser-tracker coordinate system, the translation vector can also be determined. The proposed method not only achieves the calibration of a single laser-tracking measurement system but also provides a reference for the calibration of a multi-station system. Simulations to evaluate the effects of some critical factors were conducted and show the robustness and accuracy of our method. In real experiments, the root mean square error of the calibration result reached 1.46 mm within a range of 10 m, even though the vision-guiding device focuses on a point approximately 5 m away from the origin of its coordinate system, with a field of view of approximately 200 mm × 200 mm.
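
    The geometric core of the method (rotation from matched beam directions, translation from the beams' common intersection) can be sketched as follows. This is a hedged illustration, not the authors' code: it assumes unit direction vectors of the same beams expressed in both frames and uses the standard SVD (Kabsch) solution plus a least-squares line intersection.

    ```python
    import numpy as np

    def rotation_from_directions(d_cam, d_trk):
        """d_cam, d_trk: (N, 3) matched unit direction vectors of the beams in each frame."""
        H = d_cam.T @ d_trk                          # 3x3 cross-covariance of directions
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # enforce a proper rotation
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # maps camera-frame vectors to the tracker frame

    def beam_intersection(points, directions):
        """Least-squares point closest to all lines p_i + t*d_i (directions unit length)."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(points, directions):
            P = np.eye(3) - np.outer(d, d)           # projector orthogonal to the beam
            A += P
            b += P @ p
        return np.linalg.solve(A, b)                 # tracker origin expressed in the camera frame
    ```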

  7. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. The 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame rate of one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, for example waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  8. Review of technological advancements in calibration systems for laser vision correction

    Science.gov (United States)

    Arba-Mosquera, Samuel; Vinciguerra, Paolo; Verma, Shwetabh

    2018-02-01

    Using PubMed and our internal database, we extensively reviewed the literature on technological advancements in calibration systems, with the aim of presenting an account of the development history and the latest developments in calibration systems used in refractive surgery laser systems. As a second aim, we explored the clinical impact of the error introduced by roughness in ablation and its corresponding effect on system calibration. The inclusion criterion for this review was strict relevance to the clinical questions under research. The existing calibration methods, including various plastic models, are strongly affected by factors involved in refractive surgery, such as temperature, airflow, and hydration. Surface roughness plays an important role in the accurate measurement of ablation performance on calibration materials. The ratio of ablation efficiency between the human cornea and the calibration material is critical and highly dependent on the laser beam characteristics and test conditions. Objective evaluation of the calibration data and corresponding adjustment of the laser systems at regular intervals are essential for the continuing success and further improvement in outcomes of laser vision correction procedures.

  9. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China)]; Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China)]; Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)]

    2016-11-15

    Highlights:
    • The first deployment of the EAST articulated inspection arm robot under vacuum is presented.
    • A computer vision based approach to measuring the laser spot displacement is proposed.
    • An experiment on the real EAST tokamak validates the proposed measurement approach; the results show that the measurement accuracy satisfies the requirement.
    Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition so that they reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum conditions of tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm is developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. The experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfies the laser diagnostic system calibration requirement.

  10. A Ship Cargo Hold Inspection Approach Using Laser Vision Systems

    OpenAIRE

    SHEN Yang; ZHAO Ning; LIU Haiwei; MI Chao

    2013-01-01

    Our paper presents a vision system based on a laser measurement system (LMS) for bulk ship inspection. The LMS scanner with a 2-axis servo system is installed on the ship loader to build the shape of the ship. Then, a group of real-time image processing algorithms is implemented to compute the shape of the cargo hold, the inclination angle of the ship and the relative position between the ship loader and the cargo hold. Based on those computed inspection data of the ship, the ship loader c...

  11. Hi-Vision telecine system using pickup tube

    Science.gov (United States)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  12. The Light Plane Calibration Method of the Laser Welding Vision Monitoring System

    Science.gov (United States)

    Wang, B. G.; Wu, M. H.; Jia, W. P.

    2018-03-01

    In the aerospace and automotive industries, sheet steel components are very important parts. In recent years, laser welding has been used to weld such sheet steel parts. The seam width between the two parts is usually less than 0.1 mm. Because the fixturing error cannot be eliminated, weld quality can be greatly affected. In order to improve the welding quality, line structured light is employed in the vision monitoring system to plan the welding path before welding. To improve the weld precision, the vision system is mounted on the Z axis of the computer numerical control (CNC) tool. A planar pattern is placed on the X-Y plane of the CNC tool, and the structured light is projected onto the planar pattern. The vision system stops at three different positions along the Z axis of the CNC tool, and the camera captures an image of the planar pattern at every position. Using the sub-pixel center line calculated from the structured light, the world coordinates of the light line centers can be computed. The structured light plane can then be obtained by fitting the structured light lines. Experimental results show the effectiveness of the proposed method.
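
    A minimal sketch of the final fitting step is shown below, assuming the extracted sub-pixel line centers have already been converted to 3-D points in the CNC/world frame; the light plane is then recovered as the least-squares plane through all points. This is an illustration of the idea, not the authors' implementation.

    ```python
    import numpy as np

    def fit_plane(points):
        """points: (N, 3) array of 3-D line-center points.

        Returns (unit normal n, centroid c) such that n . (x - c) = 0 on the plane.
        """
        c = points.mean(axis=0)
        _, _, Vt = np.linalg.svd(points - c)
        n = Vt[-1]                      # direction of least variance = plane normal
        return n, c
    ```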

  13. The free electron laser: a system capable of determining the gold standard in laser vision correction

    International Nuclear Information System (INIS)

    Fowler, W. Craig; Rose, John G.; Chang, Daniel H.; Proia, Alan D.

    1999-01-01

    Introduction. In laser vision correction surgery, lasers are generally utilized based on their beam-tissue interactions and corneal absorption characteristics. Therefore, the free electron laser, with its ability to provide broad wavelength tunability, is a unique research tool for investigating candidate wavelengths for corneal ablation. Methods. Mark III free electron laser wavelengths between 2.94 and 6.7 μm were delivered in serial 0.1 μm intervals to corneas of freshly enucleated porcine globes. Collateral damage, ablation depth, and ablation diameter were measured in histologic sections. Results. The least collateral damage (12-13 μm) was demonstrated at three wavelengths: 6.0, 6.1 (amide I), and 6.3 μm. Minimal collateral damage (15 μm) was noted at 2.94 μm (OH-stretch) and at 6.2 μm. Slightly greater collateral damage was noted at 6.45 μm (amide II), as well as in the 5.5-5.7 μm range, but this was still substantially less than the collateral damage noted at the other wavelengths tested. Conclusions. Our results suggest that select mid-infrared wavelengths have potential for keratorefractive surgery and warrant additional study. Further, the free electron laser's ability to allow parameter adjustment in the far-ultraviolet spectrum may provide unprecedented insights toward establishing the gold-standard parameters for laser vision correction surgery.

  14. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  15. Virtual environment assessment for laser-based vision surface profiling

    Science.gov (United States)

    ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.

    2015-03-01

    Oil and gas businesses have been raising the demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This mandates a departure from the commonly used surface measurement gauges, which are not only operator dependent but also limited to discrete measurements along the weld. Due to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively adopted as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer towards robust quality control applications in a manufacturing environment.

  16. Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding

    Science.gov (United States)

    Nilsen, Morgan; Sikström, Fredrik; Christiansson, Anna-Karin; Ancona, Antonio

    2017-11-01

    The automated laser beam butt welding process is sensitive to the positioning of the laser beam with respect to the joint, because a small offset may result in detrimental lack of sidewall fusion. This problem is even more pronounced in the case of narrow-gap butt welding, where most commercial automatic joint tracing systems fail to detect the exact position and size of the gap. In this work, a dual vision and spectroscopic sensing approach is proposed to trace narrow-gap butt joints during laser welding. The system consists of a camera with suitable illumination and matched optical filters and a fast miniature spectrometer. An image processing algorithm for the camera recordings has been developed in order to estimate the laser spot position relative to the joint position. The spectral emissions from the laser-induced plasma plume are acquired by the spectrometer and, based on measurements of the intensities of selected lines of the spectrum, the electron temperature signal is calculated and correlated to variations of process conditions. The individual performances of these two systems have been experimentally investigated and evaluated offline using data from several welding experiments, in which artificial abrupt as well as gradual deviations of the laser beam out of the joint were produced. Results indicate that combining the information provided by the vision and spectroscopic systems is beneficial for the development of a hybrid sensing system for joint tracing.
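
    The electron temperature estimate mentioned above is commonly obtained with a two-line Boltzmann ratio; the hedged sketch below shows that standard relation, not the authors' exact processing. It assumes optically thin emission lines of the same species in local thermodynamic equilibrium, with known transition probabilities A, upper-level degeneracies g and upper-level energies E (in eV).

    ```python
    import math

    K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

    def electron_temperature(I1, A1, g1, E1, lam1, I2, A2, g2, E2, lam2):
        """Two-line ratio method: returns T_e in kelvin (requires E1 != E2).

        Each line intensity is assumed proportional to (A * g / lam) * exp(-E / kT).
        """
        ratio = (I1 * A2 * g2 * lam1) / (I2 * A1 * g1 * lam2)
        return (E2 - E1) / (K_B_EV * math.log(ratio))
    ```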

  17. Simple laser vision sensor calibration for surface profiling applications

    Science.gov (United States)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand on OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique that represents a simplified version of two known calibration techniques commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data are transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique's capability against the more complex approach and to preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.

  18. Multi-channel automotive night vision system

    Science.gov (United States)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source with a collimated beam; the light source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light intensity adjustment, which together ensure image quality. The composition of the system is described in detail, and on this basis, beam collimation, the LD driving and temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, blind spot information systems (BLIS), parking assistance and car alarm systems, by day and night.

  19. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot ...

  20. Fast and intuitive programming of adaptive laser cutting of lace enabled by machine vision

    Science.gov (United States)

    Vaamonde, Iago; Souto-López, Álvaro; García-Díaz, Antón

    2015-07-01

    A machine vision system has been developed, validated, and integrated in a commercial laser robot cell. It permits offline graphical programming of laser cutting of lace. The user interface allows loading CAD designs and aligning them with images of lace pieces. Different thread widths are discriminated to generate proper cutting program templates. During online operation, the system aligns the CAD models of pieces with the lace images, pre-checks the quality of lace cuts and adapts laser parameters to thread widths. For pieces detected with the required quality, the program template is adjusted by transforming the coordinates of every trajectory point. A low-cost lace feeding system was also developed to demonstrate full process automation.
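
    The trajectory adjustment step can be illustrated with a small sketch: given matched reference points on the CAD design and on the detected lace piece, a 2-D similarity transform (scale, rotation, translation) is estimated and applied to every trajectory point. The Umeyama-style estimator below is an illustrative assumption, not necessarily the method used in the cell, and point correspondences are assumed given.

    ```python
    import numpy as np

    def fit_similarity_2d(src, dst):
        """src, dst: (N, 2) matched points. Returns scale s, rotation R (2x2), translation t."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        S, D = src - mu_s, dst - mu_d
        U, sig, Vt = np.linalg.svd(S.T @ D)              # cross-covariance of centered sets
        d = np.sign(np.linalg.det(Vt.T @ U.T))           # keep a proper rotation
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        s = (sig * np.array([1.0, d])).sum() / (S ** 2).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t

    def transform_trajectory(points, s, R, t):
        """Apply y = s * R * x + t to every (x, y) trajectory point."""
        return points @ (s * R).T + t
    ```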

  1. Acquisition And Processing Of Range Data Using A Laser Scanner-Based 3-D Vision System

    Science.gov (United States)

    Moring, I.; Ailisto, H.; Heikkinen, T.; Kilpela, A.; Myllyla, R.; Pietikainen, M.

    1988-02-01

    In our paper we describe a 3-D vision system designed and constructed at the Technical Research Centre of Finland in co-operation with the University of Oulu. The main application fields our 3-D vision system was developed for are geometric measurements of large objects and manipulator and robot control tasks. It also shows potential for automatic vehicle guidance applications. The system has now been operative for about one year and its performance has been extensively tested. Recently we started a field test phase to evaluate its performance in real industrial tasks and environments. The system consists of three main units: the range finder, the scanner and the computer. The range finder is based on direct measurement of the time-of-flight of a laser pulse. The time interval between the transmitted and received light pulses is converted into a continuous analog voltage, which is amplified, filtered and offset-corrected to produce the range information. The scanner consists of two mirrors driven by moving-iron galvanometers and controlled by servo amplifiers. The computer unit controls the scanner, transforms the measured coordinates into a Cartesian coordinate system and serves as a user interface and post-processing environment. Methods for segmenting the range image into a higher-level description have been developed. The description consists of planar and curved surfaces and their features and relations. Parametric surface representations based on the Ferguson surface patch are also studied.
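
    The range computation and the coordinate transform performed by the computer unit can be sketched under an idealized geometry (both mirror axes through the origin, mirror offsets ignored); the snippet below is illustrative only and is not the original system's software.

    ```python
    import numpy as np

    C = 3.0e8  # speed of light, m/s

    def tof_range(delta_t):
        """Pulse time-of-flight delta_t (s) -> one-way range (m)."""
        return 0.5 * C * delta_t

    def scan_to_cartesian(r, az, el):
        """Range r (m) and mirror deflection angles az, el (rad) -> (x, y, z) in the scanner frame."""
        x = r * np.cos(el) * np.cos(az)
        y = r * np.cos(el) * np.sin(az)
        z = r * np.sin(el)
        return np.stack([x, y, z], axis=-1)
    ```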

  2. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    International Nuclear Information System (INIS)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin

    2014-01-01

    Image information from disaster areas or radiation areas of the nuclear industry is important data for safety inspection and for preparing appropriate damage control plans. Thus, a robust vision system for structures and facilities in blurred, smoky environments, such as the sites of fires and detonations, is essential for remote monitoring. Vision systems cannot acquire an image when the illumination light is blocked by disturbing materials, such as smoke, fog and dust. A vision system based on wavefront correction can be applied to blurred imaging environments, and a range-gated imaging system can be applied to both blurred imaging and low-light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them using the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by ultra

  3. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2014-05-15

    Image information from disaster areas or radiation areas of the nuclear industry is important data for safety inspection and for preparing appropriate damage control plans. Thus, a robust vision system for structures and facilities in blurred, smoky environments, such as the sites of fires and detonations, is essential for remote monitoring. Vision systems cannot acquire an image when the illumination light is blocked by disturbing materials, such as smoke, fog and dust. A vision system based on wavefront correction can be applied to blurred imaging environments, and a range-gated imaging system can be applied to both blurred imaging and low-light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them using the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by ultra

  4. Development of an auto-welding system for CRD nozzle repair welds using a 3D laser vision sensor

    International Nuclear Information System (INIS)

    Park, K.; Kim, Y.; Byeon, J.; Sung, K.; Yeom, C.; Rhee, S.

    2007-01-01

    A control rod drive (CRD) nozzle is attached to the hemispherical surface of a reactor head with a J-groove weld. Primary water stress corrosion cracking (PWSCC) causes degradation in these welds, which requires that the defect areas be repaired. To perform this repair welding automatically on a complicated weld groove shape, an auto-welding system was developed, incorporating a laser vision sensor that measures the 3-dimensional (3D) shape of the groove and a weld-path creation program that calculates the weld-path parameters. Welding trials with a J-groove workpiece were performed to establish a basis for developing this auto-welding system. Because the reactor head is placed on a lay-down support, the outermost region of the CRD nozzle has restricted access. Due to this tight space, several design parameters, such as the size, weight and movement of the auto-welding system, had to be carefully considered. The cross section of the J-groove weld is basically an oval shape, and the included angle of the J-groove ranges from 0 to 57 degrees. To measure this complex shape, we used double lasers coupled to a single charge-coupled device (CCD) camera. We then developed a program to generate the weld-path parameters using the measured 3D shape as a basis. The program can determine the first and final welding positions and calculate all weld-path parameters. An optimized image-processing algorithm was applied to deal with noise interference and diffuse reflection from the joint surfaces. The auto-welding system is composed of a 4-axis manipulator, a gas tungsten arc welding (GTAW) power supply, an optimally designed and manufactured GTAW torch and a 3D laser vision sensor. Through welding trials with 0 and 38-degree included-angle workpieces with both J-groove and U-groove welds, the performance of this auto-welding system was qualified for field application.

  5. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    Directory of Open Access Journals (Sweden)

    Taikyeong Jeong

    2011-09-01

    In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it could be applied to a kinematically dissimilar robot system without loss of generality.

  6. Illumination Effect of Laser Light in Foggy Objects Using an Active Imaging System

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Ahn, Yong-Jin; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)]

    2015-05-15

    Active imaging techniques usually provide improved image information compared to passive imaging techniques. Active vision is a direct visualization technique using an artificial illuminant. Range-gated imaging (RGI) is one of the active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In the RGI system, objects are illuminated for an ultra-short time by a high intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure. Range-gated imaging is an emerging technology in the field of surveillance for security applications, especially for visualization in dark night or foggy environments. Although RGI viewing was first demonstrated in the 1960s, the technology has become more practical with the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser sources. In particular, this system can be adopted in robot vision systems by virtue of its compact configuration. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. Range imaging based on range gating has also been demonstrated. Laser light with a short pulse width is usually used for a range-gated imaging system. In this paper, the illumination effect of laser light on foggy objects is studied using a range-gated imaging system. The imaging system used consists of an ultra-short pulse (0.35 ns) laser source and a gated imaging sensor. The experiment was carried out to monitor objects in a box filled with fog. The effects of fog particles on the range-gated imaging technique are studied; edge blurring and range distortion are generated by fog particles.
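
    The range slice selected by a gate can be sketched with the basic timing relation (ignoring the finite laser pulse width); the snippet below is a simplified illustration, not the exact timing model of the experimental setup.

    ```python
    C = 3.0e8  # speed of light, m/s

    def gated_range_slice(t_delay, t_gate):
        """Gate opened t_delay (s) after the laser pulse, held open for t_gate (s).

        Returns the approximate near and far bounds (m) of the imaged range slice.
        """
        r_min = 0.5 * C * t_delay
        r_max = 0.5 * C * (t_delay + t_gate)
        return r_min, r_max

    # Example: a 3 ns delay and a 1 ns gate select targets roughly 0.45 m to 0.60 m away.
    ```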

  7. Illumination Effect of Laser Light in Foggy Objects Using an Active Imaging System

    International Nuclear Information System (INIS)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Ahn, Yong-Jin; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    Active imaging techniques usually provide improved image information compared to passive imaging techniques. Active vision is a direct visualization technique using an artificial illuminant. Range-gated imaging (RGI) is one of the active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In the RGI system, objects are illuminated for an ultra-short time by a high intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure. Range-gated imaging is an emerging technology in the field of surveillance for security applications, especially for visualization in dark night or foggy environments. Although RGI viewing was first demonstrated in the 1960s, the technology has become more practical with the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser sources. In particular, this system can be adopted in robot vision systems by virtue of its compact configuration. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. Range imaging based on range gating has also been demonstrated. Laser light with a short pulse width is usually used for a range-gated imaging system. In this paper, the illumination effect of laser light on foggy objects is studied using a range-gated imaging system. The imaging system used consists of an ultra-short pulse (0.35 ns) laser source and a gated imaging sensor. The experiment was carried out to monitor objects in a box filled with fog. The effects of fog particles on the range-gated imaging technique are studied; edge blurring and range distortion are generated by fog particles.

  8. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles (rovers) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser that generates the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from the many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator contains no moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself using data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  9. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  10. Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces

    Science.gov (United States)

    Altschuler, M. D.; Altschuler, B. R.; Taboada, J.

    1981-01-01

    It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.

  11. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision based algorithms for Unmanned Aerial Vehicles (UAV) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAV. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  12. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard and is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, monitor and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement the actions necessary to achieve the planned results and the continual improvement of these processes; d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiology departments.

  13. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.

  14. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D time-of-flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the teat 3D position. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras; this latter technology will be used in future real-life experimental tests.

  15. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  16. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  17. Low Vision Enhancement System

    Science.gov (United States)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate NASA software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  18. Development of an Advanced Aidman Vision Screener (AVS) for selective assessment of outer and inner laser induced retinal injury

    Science.gov (United States)

    Boye, Michael W.; Zwick, Harry; Stuck, Bruce E.; Edsall, Peter R.; Akers, Andre

    2007-02-01

    The need for tools that can assist in evaluating visual function is an essential and a growing requirement as lasers on the modern battlefield mature and proliferate. The requirement for rapid and sensitive vision assessment under field conditions produced the USAMRD Aidman Vision Screener (AVS), designed to be used as a field diagnostic tool for assessing laser induced retinal damage. In this paper, we describe additions to the AVS designed to provide a more sensitive assessment of laser induced retinal dysfunction. The AVS incorporates spectral LogMar Acuity targets without and with neural opponent chromatic backgrounds. Thus, it provides the capability of detecting selective photoreceptor damage and its functional consequences at the level of both the outer and inner retina. Modifications to the original achromatic AVS have been implemented to detect selective cone system dysfunction by providing LogMar acuity Landolt rings associated with the peak spectral absorption regions of the S (short), M (middle), and L (long) wavelength cone photoreceptor systems. Evaluation of inner retinal dysfunction associated with selective outer cone damage employs LogMar spectral acuity charts with backgrounds that are neurally opponent. Thus, the AVS provides the capability to assess the effect of selective cone dysfunction on the normal neural balance at the level of the inner retinal interactions. Test and opponent background spectra have been optimized by using color space metrics. A minimal number of three AVS evaluations will be utilized to provide an estimate of false alarm level.

  19. Adaptive Pulsed Laser Line Extraction for Terrain Reconstruction using a Dynamic Vision Sensor

    Directory of Open Access Journals (Sweden)

    Christian Brandli

    2014-01-01

    Full Text Available Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm.
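
    A minimal sketch of the pulse-synchronous gating idea described above, assuming a hypothetical structured NumPy event array with integer pixel fields x, y, polarity p, and timestamps t in seconds; this is only an illustration of selecting events phase-locked to the laser pulses and reducing them to one stripe row per column, not the authors' event-based algorithm.

        import numpy as np

        PULSE_HZ = 500.0    # laser pulsing frequency taken from the abstract
        GATE_S = 0.3e-3     # accept events arriving within 0.3 ms of a pulse onset (assumed)

        def stripe_from_events(events, width, height):
            # phase of each event within the laser pulsing period
            phase = np.mod(events['t'], 1.0 / PULSE_HZ)
            gated = events[(phase < GATE_S) & (events['p'] > 0)]   # ON events near a pulse
            acc = np.zeros((height, width))
            np.add.at(acc, (gated['y'], gated['x']), 1.0)          # per-pixel event counts
            rows = np.arange(height)[:, None]
            weights = acc.sum(axis=0) + 1e-9
            return (acc * rows).sum(axis=0) / weights              # sub-pixel stripe row per column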

  20. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    Science.gov (United States)

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in a closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to the traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. A calibration experiment, a projection experiment and a 3D reconstruction experiment are conducted to test the proposed method, and good results are obtained.
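
    The closed-form ELM training step mentioned above can be illustrated with a short sketch. This is a generic single-hidden-layer ELM regressor, not the authors' implementation; the input/output dimensions (2-D galvo control signals to 3-D beam vectors) are assumptions based on the abstract.

        import numpy as np

        rng = np.random.default_rng(0)

        def elm_fit(X, T, n_hidden=200):
            """X: control signals (N x 2), T: outgoing beam vectors (N x 3)."""
            W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights, never trained
            b = rng.normal(size=n_hidden)                  # random hidden biases, never trained
            H = np.tanh(X @ W + b)                         # hidden-layer activations
            beta = np.linalg.pinv(H) @ T                   # output weights solved in closed form
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta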

  1. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision

    Directory of Open Access Journals (Sweden)

    Junchao Tu

    2018-01-01

    Full Text Available A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in a closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to the traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. A calibration experiment, a projection experiment and a 3D reconstruction experiment are conducted to test the proposed method, and good results are obtained.

  2. Vision Systems with the Human in the Loop

    Science.gov (United States)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  3. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device, which communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  4. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    Science.gov (United States)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then allow the sensor mounted on the robot to measure the point of intersection of the string and the projected laser line. The points collected by changing the robot configuration and measuring the intersections are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
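
    A sketch of the straight-line closed-loop constraint described above, assuming the measured intersection points have already been transformed into a common base frame via the (nominal or identified) kinematic model; the residuals would be driven toward zero by a nonlinear least-squares solver over the kinematic error parameters. This illustrates the principle only, not the authors' code.

        import numpy as np

        def line_residuals(points):
            """Perpendicular distances of 3-D points to their best-fit line (SVD fit)."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c)
            d = vt[0]                                   # principal direction of the point cloud
            along = (points - c) @ d
            return np.linalg.norm((points - c) - np.outer(along, d), axis=1)

        # e.g. scipy.optimize.least_squares(lambda params: line_residuals(fk(params, readings)), x0)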

  5. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline produced by various adverse effects during the running of a train; it is an important basis for setting railway clearance boundaries. At present, the dynamic envelope curve of high-speed vehicles is measured mainly by binocular vision, and present measuring systems suffer from poor portability, a complicated process and high cost. In this paper, a new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed, and the measurement system parameters, the calibration of the wide-field-of-view camera, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, and the feasibility and adaptability of the measurement system are validated. The system offers lower cost, a simpler measurement and data processing procedure, and more reliable data, and it needs no matching algorithm.
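
    A minimal sketch of the monocular laser-plane triangulation that such a system relies on: a stripe pixel defines a camera ray, and intersecting that ray with the calibrated laser plane gives the 3-D contour point. The intrinsic matrix and plane parameters below are hypothetical values, not those of the paper.

        import numpy as np

        def stripe_pixel_to_3d(u, v, K, plane_n, plane_d):
            """Intersect the camera ray through pixel (u, v) with the plane n.X = d."""
            ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in the camera frame
            t = plane_d / (plane_n @ ray)                    # scale at which the ray meets the plane
            return t * ray                                   # 3-D point in camera coordinates

        K = np.array([[1200.0, 0.0, 640.0],                  # hypothetical intrinsics
                      [0.0, 1200.0, 512.0],
                      [0.0, 0.0, 1.0]])
        point = stripe_pixel_to_3d(700, 540, K, plane_n=np.array([0.0, 0.6, 0.8]), plane_d=1.5)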

  6. Vision Systems with the Human in the Loop

    Directory of Open Access Journals (Sweden)

    Bauckhage Christian

    2005-01-01

    Full Text Available The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  7. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  8. Welding technology transfer task/laser based weld joint tracking system for compressor girth welds

    Science.gov (United States)

    Looney, Alan

    1991-01-01

    Sensors to control and monitor welding operations are currently being developed at Marshall Space Flight Center. The laser based weld bead profiler/torch rotation sensor was modified to provide a weld joint tracking system for compressor girth welds. The tracking system features a precision laser based vision sensor, automated two-axis machine motion, and an industrial PC controller. The system benefits are the elimination of weld repairs caused by joint tracking errors, which reduces manufacturing costs and increases production output, simplification of tooling, and the freeing of costly manufacturing floor space.

  9. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  10. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  11. Reconfigurable vision system for real-time applications

    Science.gov (United States)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  12. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

    Human performance can degrade due to boredom, distraction and fatigue in vision-related tasks such as measurement and counting. Vision based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in the fields of sensors and related technologies, and advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time than humans. Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement, counting etc. This paper describes in brief four such applications. (author)

  13. Vision system for dial gage torque wrench calibration

    Science.gov (United States)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurements. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications like vision systems to read and calibrate analog instruments.
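
    As an illustration of the angle-measurement idea (not the system described in the paper), the dominant line in a dial image can be found with a Hough transform and its orientation compared between two pointer positions; file names and thresholds below are hypothetical.

        import cv2
        import numpy as np

        def pointer_angle(path):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            edges = cv2.Canny(img, 50, 150)
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                    minLineLength=60, maxLineGap=5)[:, 0]
            # take the longest detected segment as the pointer
            x1, y1, x2, y2 = max(lines, key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
            return np.degrees(np.arctan2(y2 - y1, x2 - x1))

        delta_deg = pointer_angle("dial_loaded.png") - pointer_angle("dial_zero.png")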

  14. Autonomous navigation of the vehicle with vision system. Vision system wo motsu sharyo no jiritsu soko seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Yatabe, T.; Hirose, T.; Tsugawa, S. (Mechanical Engineering Laboratory, Tsukuba (Japan))

    1991-11-10

    As part of research on automatic driving systems, a pilot driverless automobile was built and discussed, which is equipped with obstacle detection and automatic navigation functions without depending on ground facilities such as guiding cables. A small car was fitted with a vision system to recognize obstacles three-dimensionally by means of two TV cameras, and a dead reckoning system to calculate the car position and direction from the speeds of the rear wheels in real time. The control algorithm, which recognizes obstacles and the road extent from the vision system and drives the car automatically, uses a table-look-up method that retrieves the necessary driving commands from a table based on data from the vision system. The steering uses a target-point-following algorithm, provided that the vehicle has a map. Driving tests showed that the system meets the basic functional requirements but needs a few improvements because it is an open-loop system. 36 refs., 22 figs., 2 tabs.

  15. Health system vision of iran in 2025.

    Science.gov (United States)

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a working group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory context was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in the vision statement. In this statement prosperity and justice are considered as major values and ideals in the society of Iran; development and excellence in the region as leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation as the main orientations of the health system.

  16. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is first presented. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davidson et al. in which they dub their algorithm MonoSLAM [1--4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
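
    The landmark tracking step with OpenCV's pyramidal Lucas-Kanade tracker can be sketched as follows; parameter values are illustrative rather than those used in the thesis.

        import cv2

        def track_landmarks(prev_gray, next_gray, prev_pts):
            pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None,
                                                         winSize=(21, 21), maxLevel=3)
            ok = status.ravel() == 1
            return prev_pts[ok], pts[ok]          # corresponding landmark positions in both frames

        cap = cv2.VideoCapture(0)                 # live webcam feed, as used in the experiments
        _, frame = cap.read()
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        prev_pts = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01, minDistance=10)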

  17. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS and laser sensors, suffer several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost, maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real life vision based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal based goal-driven navigation can be carried out using vision sensing. The development concept of vision based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller based sensor systems. The book descri...

  18. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In the RGI system, objects are illuminated for an ultra-short time by a high intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images, and it moreover provides clear images of invisible fog and smoke environments by summing time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960's, this technology is nowadays more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. 3D imaging based on range gating has also been demonstrated. In this paper, a robot system to monitor structures in an invisible fog environment is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog
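
    The slice-summing principle can be illustrated with a short sketch (an assumed simplification of the RGI processing, not the developed system): each gated slice contains returns from one narrow depth band, their sum gives a 2-D image through the fog, and the index of the brightest slice per pixel gives a coarse range map.

        import numpy as np

        def range_gated(slices, gate_depth_m):
            """slices: stack of gated frames ordered by gate delay, shape (n, H, W)."""
            stack = np.asarray(slices, dtype=np.float32)
            image_2d = stack.sum(axis=0)                     # de-fogged 2-D image
            range_map = stack.argmax(axis=0) * gate_depth_m  # coarse per-pixel range estimate
            return image_2d, range_map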

  19. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    International Nuclear Information System (INIS)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin

    2014-01-01

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In the RGI system, objects are illuminated for an ultra-short time by a high intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images, and it moreover provides clear images of invisible fog and smoke environments by summing time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960's, this technology is nowadays more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. 3D imaging based on range gating has also been demonstrated. In this paper, a robot system to monitor structures in an invisible fog environment is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog

  20. A hand-held 3D laser scanning with global positioning system of subvoxel precision

    International Nuclear Information System (INIS)

    Arias, Nestor; Meneses, Nestor; Meneses, Jaime; Gharbi, Tijani

    2011-01-01

    In this paper we propose a hand-held 3D laser scanner composed of an optical head device to extract 3D local surface information and a stereo vision system with subvoxel precision to measure the position and orientation of the 3D optical head. The optical head is manually scanned over the object surface by the operator. The orientation and position of the 3D optical head are determined by a phase-sensitive method using a 2D regular intensity pattern. This phase reference pattern is rigidly fixed to the optical head and allows its 3D localization with subvoxel precision in the observation field of the stereo vision system. The 3D resolution achieved by the stereo vision system is about 33 microns at 1.8 m with an observation field of 60 cm x 60 cm.

  1. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

  2. Analysis of the speckle properties in a laser projection system based on a human eye model.

    Science.gov (United States)

    Cui, Zhe; Wang, Anting; Ma, Qianli; Ming, Hai

    2014-03-01

    In this paper, the properties of the speckle that is observed by humans in laser projection systems are theoretically analyzed. The speckle pattern on the fovea of the human retina is numerically simulated by introducing a chromatic human eye model. The results show that the speckle contrast experienced by humans is affected by the light intensity of the projected images and the wavelength of the laser source when considering the paracentral vision. Furthermore, the image quality is also affected by these two parameters. We believe that these results are useful for evaluating the speckle noise in laser projection systems.
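
    The quantity being evaluated is the speckle contrast C = sigma_I / <I>; a toy sketch is given below, with a crude random-phase/pupil simulation standing in for the paper's detailed chromatic eye model (all parameters are hypothetical).

        import numpy as np

        def speckle_contrast(intensity):
            return intensity.std() / intensity.mean()

        def simulated_speckle(n=512, pupil_radius=60, seed=1):
            rng = np.random.default_rng(seed)
            phase = np.exp(1j * 2 * np.pi * rng.random((n, n)))   # rough-surface phase screen
            fx = np.fft.fftfreq(n)
            pupil = np.add.outer(fx ** 2, fx ** 2) < (pupil_radius / n) ** 2
            field = np.fft.ifft2(np.fft.fft2(phase) * pupil)      # low-pass by a circular pupil
            return np.abs(field) ** 2

        print(speckle_contrast(simulated_speckle()))   # close to 1 for fully developed speckle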

  3. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation, which are applied for testing the correctness of the detection of tie points and time of computations, and for assessing difficulties in their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
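
    A reduced sketch of the key-point detection and matching stage on two TLS panoramic intensity images, using OpenCV's BRISK detector (one of the detectors listed above); the brute-force matcher choice and parameters are illustrative, and the subsequent orientation adjustment is not shown.

        import cv2

        def match_panoramas(path1, path2):
            img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
            img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
            brisk = cv2.BRISK_create()
            k1, d1 = brisk.detectAndCompute(img1, None)
            k2, d2 = brisk.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # binary descriptors
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
            return k1, k2, matches      # tie-point candidates for the orientation step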

  4. High-precision pose measurement method in wind tunnels based on laser-aided vision technology

    Directory of Open Access Journals (Sweden)

    Liu Wei

    2015-08-01

    Full Text Available The measurement of position and attitude parameters for the isolated target from a high-speed aircraft is a great challenge in the field of wind tunnel simulation technology. In this paper, firstly, an image acquisition method for small high-speed targets with multi-dimensional movement in a wind tunnel environment is proposed based on laser-aided vision technology. In combination with the trajectory simulation of the isolated model, reasonably distributed laser stripes and self-luminous markers are utilized to capture clear images of the object. Then, after image processing, feature extraction, stereo correspondence and reconstruction, the three-dimensional information of the laser stripes and self-luminous markers is calculated. In addition, a pose solution method based on projected laser stripes and self-luminous markers is proposed. Finally, simulation experiments on measuring the position and attitude of high-speed rolling targets are conducted, as well as accuracy verification experiments. Experimental results indicate that the proposed method is feasible and efficient for measuring the pose parameters of rolling targets in wind tunnels.

  5. Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2013-01-01

    Full Text Available The multitheodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low automation level and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Based on traditional theodolite measuring methods, this paper introduces the mechanism of the vision measurement principle and presents a novel automatic measurement method for large-scale space and large workpieces (equipment) combined with the laser theodolite measuring and vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by the collimating laser, which is coaxial with the sight-axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and the multiresolution visual imaging and tracking technology, it can realize automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of artificial error is reduced and the measuring efficiency is improved. Therefore, this method has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring of large buildings and dams.
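
    For background, the classical two-station intersection that the paper automates can be sketched as follows: each theodolite contributes a sight ray from its station, and the target is taken as the midpoint of the common perpendicular between the two rays (a generic illustration, not the paper's algorithm).

        import numpy as np

        def sight_ray(azimuth, elevation):
            return np.array([np.cos(elevation) * np.cos(azimuth),
                             np.cos(elevation) * np.sin(azimuth),
                             np.sin(elevation)])

        def intersect(p1, az1, el1, p2, az2, el2):
            d1, d2 = sight_ray(az1, el1), sight_ray(az2, el2)
            # least-squares closest approach of the two rays p1 + t1*d1 and p2 + t2*d2
            A = np.array([[d1 @ d1, -(d1 @ d2)],
                          [d1 @ d2, -(d2 @ d2)]])
            b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
            t1, t2 = np.linalg.solve(A, b)
            return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))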

  6. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  7. Optical Coherence Tomography–Based Corneal Power Measurement and Intraocular Lens Power Calculation Following Laser Vision Correction (An American Ophthalmological Society Thesis)

    Science.gov (United States)

    Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.

    2013-01-01

    Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323

  8. Low-level laser therapy improves vision in a patient with retinitis pigmentosa.

    Science.gov (United States)

    Ivandic, Boris T; Ivandic, Tomislav

    2014-03-01

    This case report describes the effects of low-level laser therapy (LLLT) in a single patient with retinitis pigmentosa (RP). RP is a heritable disorder of the retina, which eventually leads to blindness. No therapy is currently available. LLLT was applied using a continuous wave laser diode (780 nm, 10 mW average output at 292 Hz, 50% pulse modulation). The complete retina of eyes was irradiated through the conjunctiva for 40 sec (0.4 J, 0.333 W/cm2) two times per week for 2 weeks (1.6 J). A 55-year-old male patient with advanced RP was treated and followed for 7 years. The patient had complained of nyctalopia and decreasing vision. At first presentation, best visual acuity was 20/50 in each eye. Visual fields were reduced to a central residual of 5 degrees. Tritan-dyschromatopsy was found. Retinal potential was absent in electroretinography. Biomicroscopy showed optic nerve atrophy, and narrow retinal vessels with a typical pattern of retinal pigmentation. After four initial treatments of LLLT, visual acuity increased to 20/20 in each eye. Visual fields normalized except for a mid-peripheral absolute concentric scotoma. Five years after discontinuation of LLLT, a relapse was observed. LLLT was repeated (another four treatments) and restored the initial success. During the next 2 years, 17 additional treatments were performed on an "as needed" basis, to maintain the result. LLLT was shown to improve and maintain vision in a patient with RP, and may thereby have contributed to slowing down blindness.

  9. THE USE OF COMPUTER VISION ALGORITHMS FOR AUTOMATIC ORIENTATION OF TERRESTRIAL LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2016-06-01

    Full Text Available The paper presents analysis of the orientation of terrestrial laser scanning (TLS data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV algorithms are used for orientation, which are applied for testing the correctness of the detection of tie points and time of computations, and for assessing difficulties in their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  10. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  11. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  12. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems. ... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability ... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  13. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available The visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of the current audience measurement system are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person, and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.
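
    A heavily reduced sketch of the sensing front end (illustrative only, far short of the attention model evaluated in the paper): count face detections in the TV-mounted camera frame as the crudest possible proxy for viewers present.

        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def viewers_in_frame(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            return len(faces)           # number of candidate viewers facing the screen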

  14. Creating photorealistic virtual model with polarization-based vision system

    Science.gov (United States)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have been used in many fields such as education, medical services, entertainment, art, and digital archives; as computational power has increased, the demand for creating photorealistic virtual models with higher realism is growing. In the computer vision field, a number of techniques have been developed for creating virtual models by observing real objects. In this paper, we propose a method for creating a photorealistic virtual model by using a laser range sensor and a polarization-based image capture system. We capture the range and color images of the object, which is rotated on a rotary table. By using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can make a photorealistic 3D model that takes surface reflection into account. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then, the reflectance parameters of each reflection component are estimated separately. In the separation of reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
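
    The separation step can be sketched under the common assumption (not necessarily the exact model used in the paper) that the diffuse component is unpolarized while the specular lobe is strongly polarized, so a stack of images taken at many polarizer angles bounds the two components.

        import numpy as np

        def separate_reflection(images):
            """images: stack of frames taken at different polarizer angles, shape (n, H, W)."""
            i_min = images.min(axis=0)
            i_max = images.max(axis=0)
            specular = i_max - i_min    # polarized part, attributed to specular reflection
            diffuse = 2.0 * i_min       # unpolarized part (valid where specular is fully polarized)
            return diffuse, specular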

  15. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    Science.gov (United States)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid laced with a fluorescent dye or microspheres flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination purposes, a planar laser sheet passes through the column as a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For measuring velocities, while the aqueous fluid, laced with fluorescent microspheres, flows through the transparent medium, a CCD camera records the motions of the fluorescing particles on a video cassette recorder. The recorded images are acquired automatically, frame by frame, and transferred to the computer for processing, using a frame grabber and purpose-written algorithms, through an RS-232 interface. Since the grabbed images are of poor quality at this stage, some preprocessing is used to enhance the particles within the images. Finally, these enhanced particles are monitored to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid, laced with a fluorescent organic dye, flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. Subsequently, these recorded images are transferred to the computer for processing in a similar fashion to the velocity measurement. In order to have a fully automatic vision system, several detailed image processing techniques are developed to match images that have different intensity values but the same topological characteristics. This results in normalized interstitial chemical concentrations as a function of time within the porous column.
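
    The per-frame particle extraction and velocity estimation described above might look roughly like the following sketch (illustrative thresholds and nearest-neighbour matching, not the authors' algorithms).

        import numpy as np
        from scipy import ndimage

        def particle_centroids(frame, rel_threshold=0.6):
            mask = frame > rel_threshold * frame.max()         # keep bright fluorescing particles
            labels, n = ndimage.label(mask)
            return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

        def velocity_vectors(c_prev, c_next, dt):
            # nearest-neighbour matching; adequate only for sparse, slowly moving particles
            dists = np.linalg.norm(c_prev[:, None, :] - c_next[None, :, :], axis=2)
            return (c_next[dists.argmin(axis=1)] - c_prev) / dt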

  16. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
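
    A much-reduced sketch of the pipeline idea (not the AVS(2) modules themselves): emphasize contrast transitions and then pixelate the camera frame down to a hypothetical 10 x 6 electrode grid.

        import cv2
        import numpy as np

        def to_electrode_array(frame_bgr, grid=(10, 6)):      # grid = (columns, rows), assumed
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
            edges = np.abs(cv2.Laplacian(gray, cv2.CV_32F))   # emphasise edges over texture
            enhanced = np.clip(gray + 0.5 * edges, 0, 255)
            return cv2.resize(enhanced, grid, interpolation=cv2.INTER_AREA)  # one value per electrode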

  17. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve the aviator's confidence before carrying out the flight mission and accordingly improves flight safety. The system is also useful for validating visual flight procedure designs, and it aids flight procedure design.

  18. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  19. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  20. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. The autonomous vision system on TeamSat

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Riis, Troels

    1999-01-01

    The second qualification flight of Ariane 5 blasted off from the European Space Port in French Guiana on October 30, 1997, carrying on board a small technology demonstration satellite called TeamSat. Several experiments were proposed by various universities and research institutions in Europe and five...... of them were finally selected and integrated into TeamSat, namely FIPEX, VTS, YES, ODD and the Autonomous Vision System, AVS, a fully autonomous star tracker and vision system. This paper gives a short overview of the TeamSat satellite: design, implementation and mission objectives. AVS is described in more...

  2. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF), and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we will present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Florescent Displaying (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its displaying patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies and the PCB inspection system is the process of being deployed in a manufacturing plant.

  3. Neuromorphic vision sensors and preprocessors in system applications

    Science.gov (United States)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  4. INVIS : Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color nightvision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  5. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  6. Laser vision seam tracking system based on image processing and continuous convolution operator tracker

    Science.gov (United States)

    Zou, Yanbiao; Chen, Tao

    2018-06-01

    To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on the morphological image processing method and continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz. The welding torch runs smoothly even under strong arc light and splash interference. Tracking error is within ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can be as small as 15 mm, which fully meets actual welding requirements.
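
    The measurement principle mentioned here, recovering 3D weld-point coordinates from a detected laser-stripe pixel, is essentially a ray-plane intersection. The sketch below illustrates that idea under assumed calibration data (camera intrinsics K and a laser plane n·X = d in the camera frame); the names and numbers are illustrative and not taken from the paper.

```python
import numpy as np

def stripe_pixel_to_3d(u, v, K, plane_n, plane_d):
    """Back-project an image point (u, v) of the laser stripe onto the
    calibrated laser plane n . X = d (camera frame) and return its 3D point."""
    # Ray direction of the pixel in the camera frame (at unit depth)
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Intersect the ray t * ray with the laser plane: n . (t * ray) = d
    t = plane_d / (plane_n @ ray)
    return t * ray

# Example with assumed calibration values (illustrative only)
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([0.0, np.sin(np.radians(30.0)), np.cos(np.radians(30.0))])
plane_d = 0.25  # metres

weld_point = stripe_pixel_to_3d(700.5, 402.0, K, plane_n, plane_d)
print(weld_point)  # 3D coordinates that could be used to steer the torch
```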

  7. Short-Range Sensor for Underwater Robot Navigation using Line-lasers and Vision

    DEFF Research Database (Denmark)

    Hansen, Peter Nicholas; Nielsen, Mikkel Cornelius; Christensen, David Johan

    2015-01-01

    This paper investigates a minimalistic laser-based range sensor, used for underwater inspection by Autonomous Underwater Vehicles (AUV). This range detection system comprises two lasers projecting vertical lines, parallel to a camera's viewing axis, into the environment. Using both lasers......

  8. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev [University of Florida, Gainesville, FL, 32606 (United States)

    2015-07-01

    accounted for. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for computer-vision implementation depending on interior vs exterior deployment, resolution desired and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and ability to penetrate even moderate shielding. There is a significant difference between the vision sensors and radiation sensors in the way the 'source' or signals are generated. A vision sensor needs an external light-source to illuminate the object and then detects the re-emitted illumination (or lack thereof). However, for a radiation detector, the radioactive material is the source itself. The only exception to this is the field of active interrogations where radiation is beamed into a material to entice new/additional radiation emission beyond what the material would emit spontaneously. The aspect of the nuclear material being the source itself means that all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector; this can add to the observed count rate. The effect of this scattering is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that needs a combined system calibration solution and algorithms. Thus both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system
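
    The "simplified radiation fall-off as a function of distance" referred to above is the inverse-square law for a point source; scattering from nearby objects shows up as a residual on top of that model. The following is a minimal, illustrative sketch (not the algebraic or statistical method of the record) of fitting such a model and exposing the residual, with entirely synthetic numbers.

```python
import numpy as np

def expected_count_rate(distance_m, source_strength, background=0.0):
    """Simplified point-source model: count rate falls off as 1 / r^2."""
    return source_strength / distance_m**2 + background

def fit_inverse_square(distances_m, measured_rates):
    """Least-squares fit of S and B in  rate = S / r^2 + B."""
    A = np.column_stack([1.0 / np.asarray(distances_m)**2,
                         np.ones(len(distances_m))])
    (s, b), *_ = np.linalg.lstsq(A, np.asarray(measured_rates), rcond=None)
    return s, b

# Synthetic data with a distance-independent "scatter" pedestal (illustrative only)
r = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rates = expected_count_rate(r, source_strength=1000.0, background=15.0)
s, b = fit_inverse_square(r, rates)
residual = rates - expected_count_rate(r, s)   # deviation attributable to scatter
print(s, b, residual)
```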

  9. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev

    2015-01-01

    accounted for. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for computer-vision implementation depending on interior vs exterior deployment, resolution desired and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and ability to penetrate even moderate shielding. There is a significant difference between the vision sensors and radiation sensors in the way the 'source' or signals are generated. A vision sensor needs an external light-source to illuminate the object and then detects the re-emitted illumination (or lack thereof). However, for a radiation detector, the radioactive material is the source itself. The only exception to this is the field of active interrogations where radiation is beamed into a material to entice new/additional radiation emission beyond what the material would emit spontaneously. The aspect of the nuclear material being the source itself means that all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector; this can add to the observed count rate. The effect of this scattering is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that needs a combined system calibration solution and algorithms. Thus both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system

  10. A Medical Manipulator System with Lasers in Photodynamic Therapy of Port Wine Stains

    Directory of Open Access Journals (Sweden)

    Xingtao Wang

    2014-01-01

    Full Text Available Port wine stains (PWS) are a congenital malformation and dilation of the superficial dermal capillary. Photodynamic therapy (PDT) with lasers is an effective treatment of PWS with good results. However, because the laser density is uneven and nonuniform, the treatment is carried out manually by a doctor, which limits its accuracy. Additionally, since the treatment of a single lesion can take between 30 and 60 minutes, the doctor can become fatigued after only a few applications. To assist the medical staff with this treatment method, a medical manipulator system (MMS) was built to operate the lasers. The manipulator holds the laser fiber and, using a combination of active and passive joints, the fiber can be operated automatically. In addition to the control input from the doctor over a human-computer interface, information from a binocular vision system is used to guide and supervise the operation. Clinical results are compared in nonparametric values between treatments with and without the use of the MMS. The MMS, which can significantly reduce the workload of doctors and improve the uniformity of laser irradiation, was safely and helpfully applied in PDT treatment of PWS with good therapeutic results.

  11. Reviews Equipment: Chameleon Nano Flakes Book: Requiem for a Species Equipment: Laser Sound System Equipment: EasySense VISION Equipment: UV Flash Kit Book: The Demon-Haunted World Book: Nonsense on Stilts Book: How to Think about Weird Things Web Watch

    Science.gov (United States)

    2011-03-01

    WE RECOMMEND Requiem for a Species This book delivers a sober message about climate change Laser Sound System Sound kit is useful for laser demonstrations EasySense VISION Data Harvest produces another easy-to-use data logger UV Flash Kit Useful equipment captures shadows on film The Demon-Haunted World World-famous astronomer attacks pseudoscience in this book Nonsense on Stilts A thought-provoking analysis of hard and soft sciences How to Think about Weird Things This book explores the credibility of astrologers and their ilk WORTH A LOOK Chameleon Nano Flakes Product lacks good instructions and guidelines WEB WATCH Amateur scientists help out researchers with a variety of online projects

  12. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control, which operates in the 3D workspace, uses real-time image processing to perform tasks of feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axes position feedback control.
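
    Position-based visual servo, as described above, closes a control loop on the pose returned by the vision pipeline. The sketch below is a purely behavioural, software-level illustration of that loop (a proportional step toward the vision-reported pose with a velocity-style limit); it is not the FPGA implementation of the record, and all names and gains are assumptions.

```python
import numpy as np

def pbvs_step(target_pose_xyz, current_pose_xyz, gain=0.5, max_step=0.01):
    """One position-based visual-servo iteration: move a fraction of the pose error."""
    error = np.asarray(target_pose_xyz) - np.asarray(current_pose_xyz)
    step = gain * error
    norm = np.linalg.norm(step)
    if norm > max_step:                      # clamp, similar in spirit to a velocity profile
        step = step * (max_step / norm)
    return np.asarray(current_pose_xyz) + step, error

# Illustrative closed loop: the vision stage keeps reporting the object pose
pose = np.array([0.0, 0.0, 0.0])
target = np.array([0.10, -0.05, 0.20])       # object pose from 3D vision, metres (assumed)
for _ in range(200):
    pose, err = pbvs_step(target, pose)
    if np.linalg.norm(err) < 1e-3:
        break
print(pose)
```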

  13. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process the images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. Focus was given to the study of different types and sizes of obstacles, the development of the vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
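
    A minimal sketch of the kind of filtering, segmentation, edge-detection and contour-analysis chain mentioned above is given below, using OpenCV (assuming version 4.x). It is an illustrative pipeline under assumed thresholds, not the authors' implementation.

```python
import cv2

def detect_obstacles(frame_bgr, min_area=500):
    """Minimal filtering -> segmentation -> edge detection -> contour analysis chain."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                       # noise filtering
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # segmentation
    edges = cv2.Canny(blurred, 50, 150)                               # edge detection
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)           # OpenCV 4.x signature
    obstacles = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            obstacles.append((x, y, w, h))   # candidate obstacle bounding boxes
    return obstacles, edges

# obstacles, _ = detect_obstacles(cv2.imread("field.jpg"))
```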

  14. Laser system using ultra-short laser pulses

    Science.gov (United States)

    Dantus, Marcos [Okemos, MI; Lozovoy, Vadim V [Okemos, MI; Comstock, Matthew [Milford, MI

    2009-10-27

    A laser system using ultrashort laser pulses is provided. In another aspect of the present invention, the system includes a laser, pulse shaper and detection device. A further aspect of the present invention employs a femtosecond laser and binary pulse shaping (BPS). Still another aspect of the present invention uses a laser beam pulse, a pulse shaper and a SHG crystal.

  15. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  16. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Full Text Available Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions in these outdoor environments, with high variability in illumination, irregular terrain conditions or different plant growth states, among others. In this regard, three main topics have been conveniently addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic with illustrative examples focused on specific applications in agriculture, although they could also be applied in contexts other than agriculture. A case study is provided as a result of research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project for effective weed control in maize fields (wide-row crops), funded by the European Union, where the machine vision system onboard the autonomous vehicles was the most relevant part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided together with a review of methods and approaches on these topics.

  17. Machine-vision based optofluidic cell sorting

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew

    the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam’s propagation and its interaction with the laser catapulted and sorted cells....... machine vision1. This approach is gentler, less invasive and more economical compared to conventional FACS-systems. As cells are less responsive to plastic or glass objects commonly used in the optical manipulation literature2, and since laser safety would be an issue in clinical use, we develop efficient...... approaches in utilizing lasers and light modulation devices. The Generalized Phase Contrast (GPC) method3-9 that can be used for efficiently illuminating spatial light modulators10 or creating well-defined contiguous optical traps11 is supplemented by diffractive techniques capable of integrating...

  18. Robot-laser system

    International Nuclear Information System (INIS)

    Akeel, H.A.

    1987-01-01

    A robot-laser system is described for providing a laser beam at a desired location, the system comprising: a laser beam source; a robot including a plurality of movable parts including a hollow robot arm having a central axis along which the laser source directs the laser beam; at least one mirror for reflecting the laser beam from the source to the desired location, the mirror being mounted within the robot arm to move therewith and relative thereto about a transverse axis that extends angularly to the central axis of the robot arm; and an automatic programmable control system for automatically moving the mirror about the transverse axis relative to and in synchronization with movement of the robot arm to thereby direct the laser beam to the desired location as the arm is moved.

  19. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

    In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to have one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a sensor measurement statistical model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)
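
    Grid-based sensor fusion of the kind described, accumulating range data onto a regular grid while accounting for sensor uncertainty, is commonly expressed as a log-odds occupancy grid. The sketch below is a simplified illustration under assumed grid size, resolution and update weights; it is not the system described in the record.

```python
import numpy as np

class OccupancyGrid:
    """Accumulate range measurements onto a 2D grid using log-odds updates."""
    def __init__(self, size=200, resolution=0.05, l_occ=0.85, l_free=-0.4):
        self.grid = np.zeros((size, size))   # log-odds, 0 = unknown
        self.res = resolution
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, robot_xy, hit_xy):
        """Mark cells along the ray as free and the end cell as occupied."""
        r = np.array(robot_xy) / self.res
        h = np.array(hit_xy) / self.res
        n = int(np.linalg.norm(h - r)) + 1
        for s in np.linspace(0.0, 1.0, n, endpoint=False):
            i, j = (r + s * (h - r)).astype(int)
            self.grid[i, j] += self.l_free   # space crossed by the ray is likely free
        i, j = h.astype(int)
        self.grid[i, j] += self.l_occ        # evidence of an obstacle at the range return

    def probabilities(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.grid))

# grid = OccupancyGrid(); grid.update((1.0, 1.0), (2.5, 1.8))
```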

  20. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  1. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function to monitor the safety of the facilities. 2D and range image data acquired from low-visibility environments are important data to assess the safety and prepare appropriate countermeasures. Passive vision systems, such as conventional camera and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by the scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, the performance is considerably decreased in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and it moreover provides clear images from low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot-vision systems by virtue of its compact portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. Also, this technology has been
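
    The core idea stated above, building a clear 2D image from the sum of time-sliced gated images and a range image from the gate delays, can be sketched as follows. This is an illustrative fusion of an assumed stack of gated slices with known gate delays, not the processing of the described system.

```python
import numpy as np

def fuse_gated_slices(slices, gate_delays_ns, c=0.3):
    """Fuse range-gated slices into a 2D intensity image and a range image.

    slices         : array of shape (num_gates, H, W), one image per gate delay
    gate_delays_ns : round-trip gate delay of each slice in nanoseconds
    c              : speed of light, ~0.3 metres per nanosecond
    """
    slices = np.asarray(slices, dtype=float)
    intensity = slices.sum(axis=0)                 # summed time-sliced image
    best_gate = slices.argmax(axis=0)              # gate with the strongest return per pixel
    ranges = 0.5 * c * np.asarray(gate_delays_ns)[best_gate]  # round trip -> distance
    return intensity, ranges

# Demo with synthetic slices (8 gates of a 4x4 image)
slices = np.random.rand(8, 4, 4)
intensity, ranges = fuse_gated_slices(slices, gate_delays_ns=np.arange(8) * 20 + 100)
```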

  2. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function to monitor the safety of the facilities. 2D and range image data acquired from low-visibility environments are important data to assess the safety and prepare appropriate countermeasures. Passive vision systems, such as conventional camera and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by the scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, the performance is considerably decreased in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and it moreover provides clear images from low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. Especially, this system can be adopted in robot-vision systems by virtue of its compact portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. Also, this technology has been

  3. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing

    Science.gov (United States)

    Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping

    2015-11-01

    The quality inspection process is usually carried out after first processing of the raw materials, such as cutting and milling. This is because the parts of the materials to be used are unidentified until they have been trimmed. If the quality of the material is assessed before the laser process, then the energy and effort wasted on defective materials can be saved. We proposed a new production scheme that can achieve quantitative quality inspection prior to the initial laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, collaborating with another rear camera, the 3-D cutting path is reprojected to both the frontal and rear views of the object and thus generates the regions-of-interest (ROIs) for surface defect analysis. An accurate visually guided laser process and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. The prototype system was built and tested with the processing of raw duck feathers for high-quality badminton shuttle manufacture. Incorporating a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, which results in fully automatic feather cutting and sorting.
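
    The reprojection step described above, projecting a 3-D cutting path into another calibrated camera view to obtain an ROI for defect analysis, can be illustrated with OpenCV's projectPoints. All calibration values and point coordinates below are assumptions for illustration, not data from the paper.

```python
import numpy as np
import cv2

def path_to_roi(path_points_3d, rvec, tvec, K, dist, margin=20):
    """Project a 3D cutting path into a camera view and return a padded ROI box."""
    pts, _ = cv2.projectPoints(np.asarray(path_points_3d, dtype=np.float32),
                               rvec, tvec, K, dist)
    pts = pts.reshape(-1, 2)
    x_min, y_min = pts.min(axis=0) - margin
    x_max, y_max = pts.max(axis=0) + margin
    return int(x_min), int(y_min), int(x_max), int(y_max)

# Illustrative calibration of the rear camera (assumed values)
K = np.array([[900.0, 0.0, 320.0], [0.0, 900.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
rvec = np.zeros(3)
tvec = np.array([0.0, 0.0, 0.5])
path = [[0.00, 0.00, 0.00], [0.02, 0.01, 0.00], [0.04, 0.03, 0.01]]  # metres
print(path_to_roi(path, rvec, tvec, K, dist))  # (x_min, y_min, x_max, y_max) in pixels
```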

  4. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  5. Intensity measurement of automotive headlamps using a photometric vision system

    Science.gov (United States)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive head lamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  6. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non task-specific grasps of unknown ...... and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents....... presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, that organizes visual information into a biologically motivated hierarchical representation. The contributions...... of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour based grasping methods, the definition and evaluation of surface based grasping methods, the definition of a benchmark for testing...

  7. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    Science.gov (United States)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  8. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    Science.gov (United States)

    2015-03-26

    Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System. Thesis by David W. Jones, Capt, USAF (AFIT-ENG-MS-15-M-020). The work is declared a work of the U.S. Government and is not subject to copyright protection in the United States; distribution unlimited.

  9. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appears feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  10. Laser cutting system

    Science.gov (United States)

    Dougherty, Thomas J

    2015-03-03

    A workpiece cutting apparatus includes a laser source, a first suction system, and a first finger configured to guide a workpiece as it moves past the laser source. The first finger includes a first end provided adjacent a point where a laser from the laser source cuts the workpiece, and the first end of the first finger includes an aperture in fluid communication with the first suction system.

  11. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  12. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    Science.gov (United States)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, there can be significant differences in appearance among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Secondly, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple mill work to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper will describe the vision system that has been developed. It will assess the current system capabilities, and it will discuss the directions for future research. It will be argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  13. Gas flow parameters in laser cutting of wood- nozzle design

    Science.gov (United States)

    Kali Mukherjee; Tom Grendzwell; Parwaiz A.A. Khan; Charles McMillin

    1990-01-01

    The Automated Lumber Processing System (ALPS) is an ongoing team research effort to optimize the yield of parts in a furniture rough mill. The process is designed to couple aspects of computer vision, computer optimization of yield, and laser cutting. This research is focused on optimizing laser wood cutting. Laser machining of lumber has the advantage over...

  14. Initial evaluation of a femtosecond laser system in cataract surgery.

    Science.gov (United States)

    Chang, John S M; Chen, Ivan N; Chan, Wai-Man; Ng, Jack C M; Chan, Vincent K C; Law, Antony K P

    2014-01-01

    To report the early experience and complications during cataract surgery with a noncontact femtosecond laser system. Hong Kong Sanatorium and Hospital, Hong Kong Special Administrative Region, China. Retrospective case series. All patients had anterior capsulotomy or combined anterior capsulotomy and lens fragmentation using a noncontact femtosecond laser system (Lensar) before phacoemulsification. Chart and video reviews were performed retrospectively to determine the intraoperative complication rate. Risk factors associated with the complications were also analyzed. One hundred seventy eyes were included. Free-floating capsule buttons were found in 151 eyes (88.8%). No suction break occurred in any case. Radial anterior capsule tears occurred in 9 eyes (5.3%); they did not extend to the equator or posterior capsule. One eye (0.6%) had a posterior capsule tear. No capsular block syndrome developed, and no nuclei were dropped during irrigation/aspiration (I/A). Anterior capsule tags and miosis occurred in 4 eyes (2.4%) and 17 eyes (10.0%), respectively. Different severities of subconjunctival hemorrhages developed in 71 (43.8%) of 162 eyes after the laser procedure. The mean surgical time from the beginning to the end of suction was 6.72 minutes ± 4.57 (SD) (range 2 to 28 minutes). Cataract surgery with the noncontact femtosecond laser system was safe. No eye lost vision because of complications. Caution should be taken during phacoemulsification and I/A to avoid radial anterior capsule tears and posterior capsule tears. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  15. Development of Vision System for Dimensional Measurement for Irradiated Fuel Assembly

    International Nuclear Information System (INIS)

    Shin, Jungcheol; Kwon, Yongbock; Park, Jongyoul; Woo, Sangkyun; Kim, Yonghwan; Jang, Youngki; Choi, Joonhyung; Lee, Kyuseog

    2006-01-01

    In order to develop an advanced nuclear fuel, a series of pool side examinations (PSE) is performed to confirm the in-pile behavior of the fuel for commercial production. For this purpose, a vision system was developed to measure the mechanical integrity, such as assembly bowing, twist and growth, of the loaded lead test assembly. Using this vision system, three (3) PSE campaigns were carried out at Uljin Unit 3 and Kori Unit 2 for the advanced fuels PLUS7™ and 16ACE7™ developed by KNFC. Among the main characteristics of the vision system are its very simple structure and measuring principle. This feature greatly reduces equipment installation and inspection time, and allows the PSE to be finished without disturbing the fuel loading and unloading activities during utility overhaul periods. Another feature is the high accuracy and repeatability achieved by this vision system

  16. Low Cost Night Vision System for Intruder Detection

    Science.gov (United States)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histogram for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using OpenCV library on Intel compatible notebook computers, running Ubuntu Linux operating system, with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
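
    A minimal sketch of an RGB colour-histogram check of the kind described, built on the OpenCV library the record mentions, is shown below. The background model, bin count and decision threshold are assumptions for illustration; the authors' actual detection logic may differ.

```python
import cv2

def rgb_histogram(frame_bgr, bins=32):
    """Flattened, normalised RGB colour histogram of a frame."""
    hist = cv2.calcHist([frame_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def is_intruder(background_hist, frame_bgr, threshold=0.7):
    """Flag an intruder when the frame's histogram no longer matches the background."""
    similarity = cv2.compareHist(background_hist, rgb_histogram(frame_bgr),
                                 cv2.HISTCMP_CORREL)
    return similarity < threshold

# Illustrative usage with a webcam (uncomment to run on a machine with a camera)
# cap = cv2.VideoCapture(0)
# _, background = cap.read()
# bg_hist = rgb_histogram(background)
# while True:
#     ok, frame = cap.read()
#     if not ok:
#         break
#     if is_intruder(bg_hist, frame):
#         print("Possible intruder detected")
```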

  17. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  18. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  19. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
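
    The perimeter definition used above, the convex hull of the triangulated mid-length cross-section, can be sketched as follows. The profile points are assumed to already be expressed in millimetres in the measuring plane, SciPy is assumed to be available, and the semi-circular test profile is purely illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def cross_section_perimeter(profile_points_mm):
    """Perimeter of the convex hull of a 2D cross-section profile (in mm).

    profile_points_mm : (N, 2) array of triangulated points in the measuring plane.
    """
    hull = ConvexHull(np.asarray(profile_points_mm, dtype=float))
    vertices = hull.points[hull.vertices]          # hull vertices in order
    closed = np.vstack([vertices, vertices[:1]])   # close the polygon
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()

# Illustrative semi-circular profile of a 50 mm wide loaf (upper surface only;
# the hull closes the underside with a straight chord)
theta = np.linspace(0.0, np.pi, 50)
profile = np.column_stack([25.0 * np.cos(theta), 25.0 * np.sin(theta)])
print(round(cross_section_perimeter(profile), 1))
```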

  20. Application of Various Lasers to Laser Trimming Resistance System

    Institute of Scientific and Technical Information of China (English)

    SUN Ji-feng

    2007-01-01

    Though laser resistance trimming has been an established laser machining industry for over 30 years, the development of technology brings new alternative lasers that can be used for this traditional machining. The paper describes the application of various lasers to the laser trimming resistance system, including the early traditional krypton arc lamp pumped Nd:YAG laser, the modern popular diode pumped solid state laser, and the present advanced harmonic diode pumped solid state laser. Using the new alternative lasers in the laser trimming resistance system can dramatically improve the yields and equipment performance.

  1. High power lasers & systems

    OpenAIRE

    Chatwin, Chris; Young, Rupert; Birch, Philip

    2015-01-01

    Some laser history; Airborne Laser Testbed & Chemical Oxygen Iodine Laser (COIL); Laser modes and beam propagation; Fibre lasers and applications; US Navy Laser system – NRL 33kW fibre laser; Lockheed Martin 30kW fibre laser; Conclusions

  2. Profitability analysis of a femtosecond laser system for cataract surgery using a fuzzy logic approach.

    Science.gov (United States)

    Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M

    2016-01-01

    To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic using a fuzzy logic approach. In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) during a period of 5 years were considered. A sensitivity analysis was performed considering such costs and the accounting amortization of the system during this 5-year period. Furthermore, a fuzzy logic analysis was used to obtain an estimation of the income associated with each femtosecond laser-assisted cataract surgery (G). According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis indicated that the patient had to pay more per surgery, between $661.80 and $667.40, without considering the cost of the intraocular lens (IOL). Profitability of femtosecond laser systems for cataract surgery can be achieved after a detailed financial analysis, especially in centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients and their ability to pay within a reasonable range of cost.
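
    The break-even reasoning behind the sensitivity analysis (annual surgical volume versus fee per surgery over a 5-year amortization) can be illustrated with a small calculation. The cost figures below are placeholders chosen only to make the arithmetic concrete; they are not the study's data.

```python
def breakeven_fee(acquisition_cost, annual_running_cost, surgeries_per_year, years=5):
    """Minimum fee per surgery that covers straight-line amortization plus running costs."""
    total_cost = acquisition_cost + annual_running_cost * years
    return total_cost / (surgeries_per_year * years)

# Placeholder figures for illustration only (not taken from the study)
fee = breakeven_fee(acquisition_cost=400_000,     # laser platform, USD (assumed)
                    annual_running_cost=60_000,   # service, consumables, staff time (assumed)
                    surgeries_per_year=1400)
print(f"Break-even fee per surgery: ${fee:.2f}")  # ~$100 per case before IOL and margin
```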

  3. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    International Nuclear Information System (INIS)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and for preparing appropriate damage control plans. So, an automatic vision system to monitor structures and facilities in blurred, smoky environments, such as the sites of a fire or detonation, is essential. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog and dust. To overcome the imaging distortion caused by obstacle materials, robust vision systems should have extra functions, such as active illumination through disturbance materials. One type of active vision system is a range-gated imaging system. The vision system based on range-gated imaging can acquire image data from blurred and darkened light environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique providing 2D and range image data is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through disturbance materials, such as smoke particles and dust particles. In contrast to passive conventional vision systems, the RGI active vision technology enables operation even in harsh environments like low-visibility smoky environments. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of illumination light, a range-gating camera and a control computer. Visualization experiments are carried out in a low-visibility foggy environment to assess imaging capability.

  4. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and for preparing appropriate damage control plans. So, an automatic vision system to monitor structures and facilities in blurred, smoky environments, such as the sites of a fire or detonation, is essential. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog and dust. To overcome the imaging distortion caused by obstacle materials, robust vision systems should have extra functions, such as active illumination through disturbance materials. One type of active vision system is a range-gated imaging system. The vision system based on range-gated imaging can acquire image data from blurred and darkened light environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique providing 2D and range image data is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through disturbance materials, such as smoke particles and dust particles. In contrast to passive conventional vision systems, the RGI active vision technology enables operation even in harsh environments like low-visibility smoky environments. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of illumination light, a range-gating camera and a control computer. Visualization experiments are carried out in a low-visibility foggy environment to assess imaging capability.

  5. A smart sensor-based vision system: implementation and evaluation

    International Nuclear Information System (INIS)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R

    2006-01-01

    One of the methods of solving the computational complexity of image-processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof-of-concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach used is compared with two architectures implementing CMOS active pixel sensors (APS) and interfaced to the same microcontroller. The comparison is related to image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computations

  6. A smart sensor-based vision system: implementation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R [Institute of Fundamental Electronics, Bat. 220, Paris XI University, 91405 Orsay (France)

    2006-04-21

    One of the methods of solving the computational complexity of image-processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof-of-concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach used is compared with two architectures implementing CMOS active pixel sensors (APS) and interfaced to the same microcontroller. The comparison is related to image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computations.

  7. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  8. Laser transmitter system

    International Nuclear Information System (INIS)

    Dye, R.A.

    1975-01-01

    A laser transmitter system is disclosed which utilizes mechanical energy for generating an output pulse. The laser system includes a current-developing device, such as a piezoelectric crystal, which charges a storage device such as a capacitor in response to a mechanical input signal. The capacitor is coupled to a switching device, such as a silicon controlled rectifier (SCR). The switching device is coupled to a laser transmitter, such as a GaAs laser diode, which provides an output signal in response to the capacitor being discharged.
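    The record names the pulse chain (piezoelectric source, storage capacitor, SCR switch, GaAs laser diode) but gives no figures. A hedged back-of-the-envelope sketch with purely illustrative component values shows how the stored energy and the discharge-pulse scale for such a circuit would be estimated.

```python
# Illustrative only: component values are assumptions, not taken from the patent.
C = 1.0e-6        # storage capacitance (F)
V = 50.0          # voltage reached from the piezoelectric source (V)
R_load = 5.0      # assumed effective series resistance of SCR + laser diode (ohm)

energy_J = 0.5 * C * V**2                 # energy stored before the SCR fires
peak_current_A = V / R_load               # upper bound on the discharge current
time_constant_s = R_load * C              # RC time scale of the discharge pulse

print(f"stored energy ≈ {energy_J*1e3:.2f} mJ, "
      f"peak current ≈ {peak_current_A:.1f} A, "
      f"pulse time constant ≈ {time_constant_s*1e6:.1f} µs")
```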

  9. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques and spectral signature is described. The vision instruments for food analysis as well as datasets of the food items...... used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis and linear versus non-linear approaches. One supervised feature selection algorithm...... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods together with some other state-of-the-art statistical and mathematical analysis techniques are applied on datasets of different food items; meat, dairy, fruits...

  10. Target isolation system, high power laser and laser peening method and system using same

    Science.gov (United States)

    Dane, C. Brent; Hackel, Lloyd A.; Harris, Fritz

    2007-11-06

    A system for applying a laser beam to work pieces includes a laser system producing a high-power output beam. Target delivery optics are arranged to deliver the output beam to a target work piece. A relay telescope having a telescope focal point is placed in the beam path between the laser system and the target delivery optics. The relay telescope relays an image between an image location near the output of the laser system and an image location near the target delivery optics. A baffle is placed at the telescope focal point, between the target delivery optics and the laser system, to block reflections from the target in the target delivery optics from returning to the laser system and causing damage.
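    A common way to see why the relay telescope re-images a plane near the laser output onto a plane near the delivery optics, with an intermediate focus where the baffle sits, is with ray-transfer (ABCD) matrices. The sketch below assumes a symmetric two-lens 4f relay, which the patent does not specify; it is only meant to illustrate the imaging condition.

```python
import numpy as np

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Assumed symmetric 4f relay: object plane at f before lens 1, lenses separated by 2f,
# image plane at f after lens 2; the telescope focal point (baffle location) lies midway.
f = 0.5  # focal length in metres (illustrative)
M = free_space(f) @ thin_lens(f) @ free_space(2 * f) @ thin_lens(f) @ free_space(f)

print(M)                                                      # ~[[-1, 0], [0, -1]]
print("B term ≈ 0 -> imaging condition satisfied:", abs(M[0, 1]) < 1e-12)
```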

  11. Laser assisted robotic surgery in cornea transplantation

    Science.gov (United States)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-03-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery the required high spatial precision has limited the application of robotic systems, and although several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery, improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present the preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The work originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its applications.

  12. A flexible 3D laser scanning system using a robotic arm

    Science.gov (United States)

    Fei, Zixuan; Zhou, Xiang; Gao, Xiaofei; Zhang, Guanliang

    2017-06-01

    In this paper, we present a flexible 3D scanning system based on a MEMS scanner mounted on an industrial arm with a turntable. This system has 7 degrees of freedom and is able to conduct a full-field scan from any angle, which makes it suitable for scanning objects with complex shapes. Existing non-contact 3D scanning systems usually use a laser scanner that projects a fixed stripe and is mounted on a Coordinate Measuring Machine (CMM) or an industrial robot; such systems cannot perform path planning without CAD models. The 3D scanning system presented in this paper can scan objects without CAD models, and the corresponding path-planning method is introduced. We also propose a practical approach to calibrating the hand-eye system based on binocular stereo vision and analyze the errors of the hand-eye calibration.

  13. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    Dynamical Systems and Motion Vision, Joachim Heel, A.I. Memo No. 1037, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139, April 1988.

  14. Surface Casting Defects Inspection Using Vision System and Neural Network Techniques

    Directory of Open Access Journals (Sweden)

    Świłło S.J.

    2013-12-01

    Full Text Available The paper presents a vision-based approach and neural network techniques for surface defect inspection and categorization. Depending on part design and processing techniques, castings may develop surface discontinuities, such as cracks and pores, that greatly influence the material's properties. Since human visual inspection of the surface is slow and expensive, a computer vision system is an alternative solution for online inspection. The developed vision system uses an advanced image-processing algorithm based on a modified Laplacian of Gaussian edge detection method and an advanced lighting system. The defect inspection algorithm has several parameters that allow the user to specify the sensitivity level at which defects in the casting can be accepted. In addition to the image-processing algorithm and vision system apparatus, an advanced learning process based on neural network techniques has been developed. Finally, as an example, three groups of defects were investigated, demonstrating automatic selection and categorization of the measured defects: blowholes, shrinkage porosity and shrinkage cavities.
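    The paper's modified Laplacian of Gaussian detector is not spelled out in the abstract, so the following is a generic LoG edge-detection sketch (smooth, take the Laplacian, keep strong zero-crossings) using SciPy; the sigma and contrast threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def log_edge_map(gray, sigma=2.0, threshold=0.02):
    """Laplacian-of-Gaussian edge detection: smooth with a Gaussian, take the Laplacian,
    and mark zero-crossings whose local contrast exceeds a threshold.
    Generic LoG sketch, not the authors' modified variant."""
    gray = gray.astype(np.float64) / max(gray.max(), 1e-12)
    log = ndimage.gaussian_laplace(gray, sigma=sigma)
    # zero-crossings: sign changes between a pixel and its 3x3 neighbourhood extremes
    minima = ndimage.minimum_filter(log, size=3)
    maxima = ndimage.maximum_filter(log, size=3)
    zero_cross = ((log > 0) & (minima < 0)) | ((log < 0) & (maxima > 0))
    strong = (maxima - minima) > threshold            # reject weak, noisy crossings
    return zero_cross & strong

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[40:90, 40:90] = 1.0                           # synthetic "defect" blob
    edges = log_edge_map(img)
    print("edge pixels:", int(edges.sum()))
```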

  15. The role of vision processing in prosthetic vision.

    Science.gov (United States)

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.

  16. Improvement of the image quality of a high-temperature vision system

    International Nuclear Information System (INIS)

    Fabijańska, Anna; Sankowski, Dominik

    2009-01-01

    In this paper, the issues of controlling and improving the image quality of a high-temperature vision system are considered. The image quality improvement is needed to measure the surface properties of metals and alloys. Two levels of image quality control and improvement are defined in the system. The first level, in hardware, aims at adjusting the system configuration to obtain images with the highest contrast and the weakest aura. When the optimal configuration is obtained, the second level, in software, is applied. In this stage, image enhancement algorithms are applied which have been developed with consideration of the distortions arising from the vision system components and the specificity of images acquired during the measurement process. The developed algorithms have been applied to images in the vision system, and their influence on the accuracy of wetting angle and surface tension determination is considered.

  17. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the following iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
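    The abstract only says that the initial parameter comes from R, G and B statistics of the raw data. One classical estimator of that kind is the gray-world assumption, sketched below as a plausible stand-in; the paper's actual initialization may differ.

```python
import numpy as np

def gray_world_gains(raw_rgb):
    """Gray-world white balance: scale each channel so the channel means match,
    using green as the reference. A common classical estimator; the paper's exact
    initial parameter may be computed differently."""
    means = raw_rgb.reshape(-1, 3).mean(axis=0)        # [mean_R, mean_G, mean_B]
    gains = means[1] / np.maximum(means, 1e-12)        # gain for G is 1 by construction
    return gains

def apply_gains(raw_rgb, gains):
    balanced = raw_rgb.astype(np.float64) * gains
    return np.clip(balanced, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((120, 160, 3)) * np.array([0.9, 1.0, 0.6])   # synthetic blue-deficient cast
    print("channel gains:", gray_world_gains(img))
```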

  18. Machine vision system for measuring conifer seedling morphology

    Science.gov (United States)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line- scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
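    With a backlit line-scan image, the stem appears as a dark run of pixels, so diameter measurement reduces to counting that run and multiplying by the quoted 0.05 mm transverse resolution. The sketch below illustrates this idea on a synthetic scan line; the thresholding scheme is an assumption, not the system's actual algorithm.

```python
import numpy as np

PIXEL_SIZE_MM = 0.05   # transverse resolution quoted in the abstract

def stem_diameter_mm(scan_line, threshold=0.5):
    """Estimate stem diameter from one backlit line-scan: the stem casts a dark run of
    pixels, so its width in pixels times the pixel size gives the diameter.
    The fixed threshold is an illustrative assumption."""
    dark = np.asarray(scan_line) < threshold
    return dark.sum() * PIXEL_SIZE_MM

if __name__ == "__main__":
    line = np.ones(2048)
    line[1000:1064] = 0.1                          # synthetic 64-pixel-wide stem shadow
    print(f"diameter ≈ {stem_diameter_mm(line):.2f} mm")   # ≈ 3.20 mm
```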

  19. Automatic diameter control system applied to the laser heated pedestal growth technique

    Directory of Open Access Journals (Sweden)

    Andreeta M.R.B.

    2003-01-01

    Full Text Available We describe an automatic diameter control (ADC) system for the laser-heated pedestal growth technique that reduces the diameter fluctuations in oxide fibers grown from unreacted and non-sintered pedestals to less than 2% of the average fiber diameter, and diminishes the average diameter fluctuation over the entire length of the fiber to less than 1%. The ADC apparatus is based on an artificial vision system that controls the pulling speed and the height of the molten zone within a precision of 30 μm. We also show that this system can be used for periodic in situ axial doping of the fiber. Pure and Cr3+-doped LaAlO3 and pure LiNbO3 were used as model materials.
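    The abstract states that the vision system controls the pulling speed but not the control law. A minimal proportional-control sketch, under the assumption that fiber diameter decreases as pulling speed increases (mass conservation in pedestal growth), is shown below with an illustrative gain; it is not the authors' controller.

```python
def update_pull_speed(v_pull, d_measured_um, d_target_um, gain=0.02):
    """One step of a proportional controller: if the fiber is too thick, pull faster;
    if too thin, pull slower. The gain and units are illustrative assumptions."""
    error = d_measured_um - d_target_um
    return v_pull * (1.0 + gain * error / d_target_um)

# toy usage: drive a 420 um reading toward a 400 um target
v = 1.0  # mm/min, illustrative
for d in (420.0, 412.0, 405.0, 401.0):
    v = update_pull_speed(v, d, 400.0)
    print(f"measured {d:.0f} um -> new pull speed {v:.4f} mm/min")
```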

  20. Powering laser diode systems

    CERN Document Server

    Trestman, Grigoriy A

    2017-01-01

    This Tutorial Text discusses the competent design and skilled use of laser diode drivers (LDDs) and power supplies (PSs) for the electrical components of laser diode systems. It is intended to help power-electronic design engineers during the initial design stages: the choice of the best PS topology, the calculation of parameters and components of the PS circuit, and the computer simulation of the circuit. Readers who use laser diode systems for research, production, and other purposes will also benefit. The book will help readers avoid errors when creating laser systems from ready-made blocks, as well as understand the nature of the "mystical failures" of laser diodes (and possibly prevent them).

  1. Infrared laser system

    International Nuclear Information System (INIS)

    Cantrell, C.D.; Carbone, R.J.

    1977-01-01

    An infrared laser system and method for isotope separation may comprise a molecular gas laser oscillator to produce a laser beam at a first wavelength, Raman spin-flip means for shifting the laser to a second wavelength, a molecular gas laser amplifier to amplify said second wavelength laser beam to high power, and optical means for directing the second wavelength, high power laser beam against a desired isotope for selective excitation thereof in a mixture with other isotopes. The optical means may include a medium which shifts the second wavelength high power laser beam to a third wavelength, high power laser beam at a wavelength coincident with a corresponding vibrational state of said isotope and which is different from the vibrational states of the other isotopes in the gas mixture.

  2. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  3. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in a passenger car requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
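    Once corresponding feature pairs are found, the range to an obstacle follows from the standard rectified-stereo relation Z = f·B/d. The sketch below applies that relation with assumed focal length and baseline values, which are not parameters of the paper's system.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d for a rectified pair: a matched feature
    with disparity d (pixels) lies at range Z along the optical axis.
    The numbers used below are illustrative assumptions."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    f_px, baseline = 800.0, 0.30          # assumed focal length (px) and camera baseline (m)
    for d in (40.0, 20.0, 8.0):
        print(f"disparity {d:4.1f} px -> range {depth_from_disparity(d, f_px, baseline):5.1f} m")
```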

  4. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low-level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used...

  5. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

    Full Text Available The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera and thus those of the car. These image-georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  6. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera and thus those of the car. These image-georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
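    The abstract describes EKF-based fusion of image-georeferencing fixes with GPS and in-vehicle sensor data. As a structural illustration only, the following sketch shows the predict/update cycle of a simple constant-velocity Kalman filter in which position fixes of different accuracy (e.g. GPS versus vision) are fused; the noise values and the linear model are assumptions, not the paper's filter.

```python
import numpy as np

class SimpleKF:
    """Minimal constant-velocity Kalman filter for a 2D position/velocity state.
    The paper uses an extended KF with camera resection and in-vehicle sensors;
    this only illustrates the predict/update structure of such a fusion."""

    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                            # [px, py, vx, vy]
        self.P = np.eye(4) * 100.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                # position integrates velocity
        self.Q = np.eye(4) * 0.01                       # process noise (assumed)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # both sensors observe position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, meas_var):
        R = np.eye(2) * meas_var                        # per-sensor measurement noise
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x += K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = SimpleKF()
kf.predict(); kf.update(np.array([10.0, 5.0]), meas_var=25.0)   # noisier GPS fix
kf.predict(); kf.update(np.array([10.6, 5.2]), meas_var=4.0)    # tighter vision-resection fix
print(kf.x[:2])
```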

  7. Using Weightless Neural Networks for Vergence Control in an Artificial Vision System

    Directory of Open Access Journals (Sweden)

    Karin S. Komati

    2003-01-01

    Full Text Available This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating the vergence of eye movements. This methodology involves using weightless neural networks (WNNs as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used for controlling the position of the ‘foveae’ of these cameras (high-resolution region of the images captured. Our best architecture is able to control the foveae vergence movements with average error of only 3.58 image pixels, which is equivalent to an angular error of approximately 0.629°.

  8. Progress in computer vision.

    Science.gov (United States)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  9. Monitoring system of multiple fire fighting based on computer vision

    Science.gov (United States)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can aim at the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire localization, hydrant angle adjustment and system calibration are described in detail, and the design of the relevant hardware and software is introduced. The principle and process of color detection and image processing are also given. The system ran well in tests, and it has high reliability, low cost, and easy node expansion, giving it a bright prospect for application and popularization.
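    The abstract does not give the color-detection rule, so the sketch below shows one plausible approach: threshold flame-like colors in HSV, clean the mask, and return the centroid of the largest blob for the hydrant-aiming controller to track. The threshold values are assumptions, and OpenCV is used purely for illustration.

```python
import cv2
import numpy as np

def detect_fire_centroid(bgr):
    """Threshold flame-like colours (red/orange/yellow, high saturation and brightness)
    in HSV and return the centroid of the largest blob. Threshold values are assumptions,
    not those used in the paper."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 180), (35, 255, 255))       # (H, S, V) lower/upper
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])            # (x, y) in pixels

if __name__ == "__main__":
    frame = np.zeros((240, 320, 3), np.uint8)
    cv2.circle(frame, (200, 120), 30, (0, 80, 255), -1)          # synthetic orange "flame"
    print(detect_fire_centroid(frame))
```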

  10. A robotic platform for laser welding of corneal tissue

    Science.gov (United States)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-07-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery the required high spatial precision has limited the application of robotic systems, and although several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery, improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present the preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The work originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its applications.

  11. Synthetic vision systems: operational considerations simulation experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  12. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  13. NASA Laser Communications with Adaptive Optics and Linear Mode Photon Counting, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In this effort, the Optical Sciences Company (tOSC) and Raytheon Vision Systems (RVS) will team to provide NASA with a long range laser communications system for...

  14. An Automatic Assembling System for Sealing Rings Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2017-01-01

    Full Text Available In order to grab and place the sealing rings of a battery lid quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically with the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally, the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can grab the sealing rings and place them on the sealing ports of the fast-moving battery lid successfully. More importantly, the proposed system can markedly improve the efficiency of the battery production line.

  15. Laser systems for on-line laser ion sources

    International Nuclear Information System (INIS)

    Geppert, Christopher

    2008-01-01

    Since its initiation in the middle of the 1980s, the resonant ionization laser ion source has been established as a reliable and efficient on-line ion source for radioactive ion beams. In comparison to other on-line ion sources it offers the advantages of high versatility regarding the elements to be ionized and of high selectivity and purity for the ion beam generated by resonant laser radiation. Dye laser systems have been the predominant and pioneering workhorses for laser ion source applications until recently, but the development of all-solid-state titanium:sapphire laser systems has now initiated a significant evolution within this field. In this paper an overview of the ongoing developments is given, which have contributed to the establishment of a number of new laser ion source facilities worldwide during the last five years.

  16. Navigated Pattern Laser System versus Single-Spot Laser System for Postoperative 360-Degree Laser Retinopexy.

    Science.gov (United States)

    Kulikov, Alexei N; Maltsev, Dmitrii S; Boiko, Ernest V

    2016-01-01

    Purpose. To compare three 360° laser retinopexy (LRP) approaches (using a navigated pattern laser system, single-spot slit-lamp (SL) laser delivery, and single-spot indirect ophthalmoscope (IO) laser delivery) with regard to procedure duration, procedural pain score, technical difficulties, and the ability to achieve surgical goals. Material and Methods. Eighty-six rhegmatogenous retinal detachment patients (86 eyes) were included in this prospective randomized study. The mean procedural time, procedural pain score (using a 4-point Verbal Rating Scale), number of laser burns, and achievement of the surgical goals were compared between three groups (pattern LRP (Navilas® laser system), 36 patients; SL-LRP, 28 patients; and IO-LRP, 22 patients). Results. In the pattern LRP group, the time needed for LRP and the pain level were statistically significantly lower, whereas the number of applied laser burns was higher, compared to the SL-LRP and IO-LRP groups. In the pattern LRP, SL-LRP, and IO-LRP groups, surgical goals were fully achieved in 28 (77.8%), 17 (60.7%), and 13 patients (59.1%), respectively (p > 0.05). Conclusion. The navigated pattern approach reduces treatment time and pain in postoperative 360° LRP. Moreover, 360° pattern LRP is at least as effective in achieving the surgical goal as the conventional (slit-lamp or indirect ophthalmoscope) approaches with a single-spot laser.

  17. Color Calibration for Colorized Vision System with Digital Sensor and LED Array Illuminator

    Directory of Open Access Journals (Sweden)

    Zhenmin Zhu

    2016-01-01

    Full Text Available Color measurement by a colorized vision system is a superior method for evaluating color objectively and continuously. However, the accuracy of color measurement is influenced by the spectral response of the digital sensor and the spectral mismatch of the illumination. In this paper, a colorized vision system with a digital sensor and an LED array illuminator is presented. A polynomial-based regression method is applied to solve the problem of color calibration in the sRGB and CIE L*a*b* color spaces. By mapping the tristimulus values from RGB to sRGB color space, the color difference between the estimated values and the reference values is less than 3 ΔE. Additionally, the mapping matrix ΦRGB→sRGB has shown better performance in reducing the color difference, and it is subsequently introduced into the proposed colorized vision system for better color measurement. Printed cloth samples and colored ceramic tiles are chosen as application experiment samples for the colorized vision system. As shown in the experimental data, the average color difference of the images is less than 6 ΔE, indicating that better color measurement performance is obtained with the proposed colorized vision system.
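    Polynomial-based regression color calibration can be illustrated by its first-order case: fitting a linear-plus-offset mapping from measured device RGB to reference sRGB over a color chart by least squares, with higher-order terms added as extra design-matrix columns. The sketch below uses simulated chart data; it is not the paper's exact model.

```python
import numpy as np

def fit_rgb_to_srgb(rgb_measured, srgb_reference):
    """Least-squares fit of a linear+offset mapping (first-order polynomial regression)
    from device RGB to target sRGB: sRGB ≈ [R G B 1] @ Phi. Higher-order terms
    (R*G, R**2, ...) could be appended as extra columns of the design matrix."""
    n = rgb_measured.shape[0]
    X = np.hstack([rgb_measured, np.ones((n, 1))])             # n x 4 design matrix
    Phi, *_ = np.linalg.lstsq(X, srgb_reference, rcond=None)   # 4 x 3 mapping matrix
    return Phi

def apply_mapping(rgb, Phi):
    X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return np.clip(X @ Phi, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ref = rng.random((24, 3))                              # stand-in 24-patch chart reference
    meas = ref @ np.diag([0.8, 1.0, 1.2]) + 0.03           # simulated sensor/illuminant cast
    Phi = fit_rgb_to_srgb(meas, ref)
    err = np.abs(apply_mapping(meas, Phi) - ref).max()
    print(f"max residual after calibration: {err:.4f}")
```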

  18. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  19. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    Science.gov (United States)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the used database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of the database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar image based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our

  20. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's real-time interaction ability with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human vision system. The experimental results verified the validity of the model. The robot could achieve clear vision in real time and build a mental map that helped it remain aware of users in front of it and develop positive interactions with them.

  1. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  2. Modelling and Analysis of Vibrations in a UAV Helicopter with a Vision System

    Directory of Open Access Journals (Sweden)

    G. Nicolás Marichal Plasencia

    2012-11-01

    Full Text Available The analysis of the nature and damping of unwanted vibrations in Unmanned Aerial Vehicle (UAV) helicopters is an important task when images from on-board vision systems are to be obtained. In this article, the authors model a UAV system, generate a range of vibrations originating in the main rotor and design a control methodology in order to damp these vibrations. The UAV is modelled using VehicleSim, and the vibrations that appear on the fuselage are analysed using SimMechanics software to study their effects on the on-board vision system. Following this, the authors present a control method based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) that achieves satisfactory damping results for the on-board vision system.

  3. Laser Megajoule synchronization system

    International Nuclear Information System (INIS)

    Luttmann, M.; Pastor, J.F; Drouet, V.; Prat, M.; Raimbourg, J.; Adolf, A.

    2011-01-01

    This paper describes the synchronisation system under development for the Laser Megajoule (LMJ), intended to synchronize the laser quads on the target to better than 40 ps rms. Our architecture is based on a Timing System (TS) which delivers trigger signals with jitter down to 15 ps rms, coupled with an ultra-precision timing system with 5 ps rms jitter. In addition to the TS, a sensor placed at the target chamber center measures the arrival times of the 3ω nanojoule laser pulses generated by front-end shots. (authors)
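    If the two timing contributions are statistically independent, their rms jitters add in quadrature; under that assumption (which is ours, not a statement from the paper) the combined jitter stays well inside the 40 ps rms budget:

```python
# Assuming independent contributions, rms jitters add in quadrature (illustrative check).
ts_jitter_ps = 15.0      # Timing System trigger jitter (rms)
ups_jitter_ps = 5.0      # ultra-precision timing system jitter (rms)

combined_ps = (ts_jitter_ps**2 + ups_jitter_ps**2) ** 0.5
print(f"combined jitter ≈ {combined_ps:.1f} ps rms (budget: 40 ps rms)")   # ≈ 15.8 ps
```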

  4. Fiscal 1998 achievement report on regional consortium research and development project. Venture business fostering regional consortium--Creation of key industries (Development of Task-Oriented Robot Control System TORCS based on versatile 3-dimensional vision system VVV--Vertical Volumetric Vision); 1998 nendo sanjigen shikaku system VVV wo mochiita task shikogata robot seigyo system TORCS no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Research is conducted for the development of a highly autonomous robot control system, TORCS, for the purpose of realizing an automated, unattended manufacturing process. In the development of an interface, an indicating function is built which easily adds or removes job attributes relative to given shape data. In the development of the 3-dimensional vision system VVV, a camera set and a new range finder are manufactured for ranging and recognition, the latter being an improvement over the conventional laser-aided range finder TDS. A 3-dimensional image processor is developed, which picks up pictures at a speed approximately 8 times higher than that of the conventional type. In the development of trajectory-calculating software, a job planner, an operation planner, and a vision planner are prepared. A robot program which is necessary for robot operation is also prepared. In an evaluation test involving a simulated casting line, the pick-and-place concept is successfully implemented for several kinds of cast articles positioned at random on a moving conveyor. Differences in environmental conditions between manufacturing sites are not pursued in this paper on the grounds that such differences should be discussed on a case-by-case basis. (NEDO)

  5. Development of automatic laser welding system

    International Nuclear Information System (INIS)

    Ohwaki, Katsura

    2002-01-01

    Lasers are a new production tool for high-speed, low-distortion welding, and their application to automatic welding lines is increasing. IHI has long experience with laser processing for the preservation of nuclear power plants, the welding of airplane engines and so on. Moreover, YAG laser oscillators and various kinds of hardware have been developed for laser welding and automation. Combining these welding technologies and laser hardware technologies produces the automatic laser welding system. In this paper, the component technologies are described, including combined optics intended to improve welding stability, laser oscillators, a monitoring system, a seam-tracking system and so on. (author)

  6. Background staining of visualization systems in immunohistochemistry: comparison of the Avidin-Biotin Complex system and the EnVision+ system.

    Science.gov (United States)

    Vosse, Bettine A H; Seelentag, Walter; Bachmann, Astrid; Bosman, Fred T; Yan, Pu

    2007-03-01

    The aim of this study was to evaluate specific immunostaining and background staining in formalin-fixed, paraffin-embedded human tissues with the 2 most frequently used immunohistochemical detection systems, Avidin-Biotin-Peroxidase (ABC) and EnVision+. A series of fixed tissues, including breast, colon, kidney, larynx, liver, lung, ovary, pancreas, prostate, stomach, and tonsil, was used in the study. Three monoclonal antibodies, 1 against a nuclear antigen (Ki-67), 1 against a cytoplasmic antigen (cytokeratin), and 1 against a cytoplasmic and membrane-associated antigen and a polyclonal antibody against a nuclear and cytoplasmic antigen (S-100) were selected for these studies. When the ABC system was applied, immunostaining was performed with and without blocking of endogenous avidin-binding activity. The intensity of specific immunostaining and the percentage of stained cells were comparable for the 2 detection systems. The use of ABC caused widespread cytoplasmic and rare nuclear background staining in a variety of normal and tumor cells. A very strong background staining was observed in colon, gastric mucosa, liver, and kidney. Blocking avidin-binding capacity reduced background staining, but complete blocking was difficult to attain. With the EnVision+ system no background staining occurred. Given the efficiency of the detection, equal for both systems or higher with EnVision+, and the significant background problem with ABC, we advocate the routine use of the EnVision+ system.

  7. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de Sonora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

    This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun's position throughout the day. The algorithms to calculate the position of the sun and to process the image were developed in LabVIEW. The responses obtained from the control show that it is possible to use vision in a closed-loop control scheme.

  8. Laser spark distribution and ignition system

    Science.gov (United States)

    Woodruff, Steven [Morgantown, WV; McIntyre, Dustin L [Morgantown, WV

    2008-09-02

    A laser spark distribution and ignition system is disclosed that reduces the high-power optical requirements of a laser ignition and distribution system, allowing optical fibers to be used for delivering the low-peak-energy pumping pulses to a laser amplifier or laser oscillator. An optical distributor distributes and delivers optical pumping energy from an optical pumping source to multiple combustion chambers incorporating laser oscillators or laser amplifiers for inducing a laser spark within a combustion chamber. The optical distributor preferably includes a single rotating mirror or lens which deflects the optical pumping energy from the axis of rotation and into a plurality of distinct optical fibers, each connected to a respective laser medium or amplifier coupled to an associated combustion chamber. The laser spark generators preferably produce a high-peak-power laser spark from a single low-power pulse. The laser spark distribution and ignition system has applications in natural gas fueled reciprocating engines, turbine combustors, explosives and laser-induced breakdown spectroscopy diagnostic sensors.

  9. Using range vision for telerobotic control in hazardous environments

    International Nuclear Information System (INIS)

    Lipsett, M.G.; Ballantyne, W.J.

    1996-01-01

    This paper describes how range vision augments a telerobotic system. The robot has a manipulator arm mounted onto a mobile platform. The robot is driven by a human operator under remote control to a work site, and then the operator uses video cameras and laser range images to perform manipulation tasks. A graphical workstation displays a three-dimensional image of the workspace to the operator, and a CAD model of the manipulator moves in this 'virtual environment' while the actual manipulator moves in the real workspace. This paper gives results of field trials of a remote excavation system, and describes a remote inspection system being developed for reactor maintenance. (author)

  10. On-Chip Laser-Power Delivery System for Dielectric Laser Accelerators

    Science.gov (United States)

    Hughes, Tyler W.; Tan, Si; Zhao, Zhexin; Sapra, Neil V.; Leedle, Kenneth J.; Deng, Huiyang; Miao, Yu; Black, Dylan S.; Solgaard, Olav; Harris, James S.; Vuckovic, Jelena; Byer, Robert L.; Fan, Shanhui; England, R. Joel; Lee, Yun Jo; Qi, Minghao

    2018-05-01

    We propose an on-chip optical-power delivery system for dielectric laser accelerators based on a fractal "tree-network" dielectric waveguide geometry. This system replaces experimentally demanding free-space manipulations of the driving laser beam with chip-integrated techniques based on precise nanofabrication, enabling access to orders-of-magnitude increases in the interaction length and total energy gain for these miniature accelerators. Based on computational modeling, in the relativistic regime, our laser delivery system is estimated to provide 21 keV of energy gain over an acceleration length of 192 μm with a single laser input, corresponding to a 108-MV/m acceleration gradient. The system may achieve 1 MeV of energy gain over a distance of less than 1 cm by sequentially illuminating 49 identical structures. These findings are verified by detailed numerical simulation and modeling of the subcomponents, and we provide a discussion of the main constraints, challenges, and relevant parameters with regard to on-chip laser coupling for dielectric laser accelerators.
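    The quoted gradient and the 1 MeV figure follow directly from the per-structure numbers; the quick arithmetic check below reproduces them to within rounding.

```python
# Reproducing the quoted figures from the abstract as a consistency check.
energy_gain_keV = 21.0
length_um = 192.0
gradient_MV_per_m = (energy_gain_keV * 1e3) / (length_um * 1e-6) / 1e6   # eV/m -> MV/m
print(f"gradient ≈ {gradient_MV_per_m:.0f} MV/m")       # ≈ 109 MV/m, matching the ~108 MV/m quoted

stages = 49
total_MeV = stages * energy_gain_keV / 1e3
total_length_mm = stages * length_um * 1e-3
print(f"{stages} stages -> ≈ {total_MeV:.2f} MeV over ≈ {total_length_mm:.1f} mm")  # ≈ 1.03 MeV, < 1 cm
```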

  11. A machine vision system for the calibration of digital thermometers

    International Nuclear Information System (INIS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Alvarez-Valado, Victor; Martín, Fernando; Formella, Arno

    2009-01-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has been shown to be a useful tool for automation support, especially when no other option is available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown on displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without requiring excessive attention from the laboratory technicians.

  12. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  13. Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis

    Science.gov (United States)

    Juday, Richard D.; Barton, R. Shane

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  14. CO laser angioplasty system: efficacy of manipulatable laser angioscope catheter

    Science.gov (United States)

    Arai, Tsunenori; Kikuchi, Makoto; Mizuno, Kyoichi; Sakurada, Masami; Miyamoto, Akira; Arakawa, Koh; Kurita, Akira; Nakamura, Haruo; Takeuchi, Kiyoshi; Utsumi, Atsushi; Akai, Yoshiro

    1992-08-01

    A percutaneous transluminal coronary angioplasty system using a unique combination of a CO laser (5 μm) and an As-S infrared glass fiber under the guidance of a manipulatable laser angioscope catheter is described. The ablation and guidance functions of this system are evaluated. The angioplasty treatment procedure under angioscope guidance was studied in an in vitro model experiment and an in vivo animal experiment. The whole angioplasty system was newly developed, comprising a transportable compact medical CO laser device which can emit up to 10 W, a 5 F manipulatable laser angioscope catheter, a thin CO laser cable with a diameter of 0.6 mm, an angioscope imaging system for laser ablation guidance, and a system controller. Anesthetized adult mongrel dogs (n = 5) with an artificial complete occlusion in the femoral artery and an artificial human vessel model including an occluded or stenotic coronary artery were used. The manipulatability of the catheter was drastically improved (both rotation and bending); therefore, precise control of ablation to expand the stenosis was obtained. A 90% artificial stenosis made of human yellow plaque in a 4.0 mm diameter vessel was expanded to a 70% stenosis by repetitive CO laser ablations with a total energy of 220 J. All procedures were performed and controlled under angioscope visualization.

  15. Vision and dual IMU integrated attitude measurement system

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang

    2018-01-01

    To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and a dual IMU (inertial measurement unit) arrangement is built. The system fuses the attitude information from vision with the angular rate measurements of the dual IMUs by means of an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured moving object and the other (slave) to the rocking base. Since the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame; the latter can be seen as redundant, harmful motion information for relative attitude measurement between the measured object and the rocking base. The slave IMU is used to remove the motion of the rocking base relative to the inertial frame from the master IMU output. The proposed integrated attitude measurement system is tested on a practical experimental platform, and experimental results with good precision and reliability show the feasibility and effectiveness of the proposed system.
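
    To make the fusion step concrete, the following is a minimal sketch (not the authors' implementation) of a linear Kalman filter on small relative Euler angles: the relative angular rate (master minus slave gyro, assumed expressed in a common frame) drives the prediction, and the vision attitude provides the correction. Noise levels and sample values are assumed.

```python
import numpy as np

def predict(x, P, w_master, w_slave, dt, Q):
    """Propagate the relative attitude with the relative angular rate (rad/s)."""
    w_rel = w_master - w_slave        # motion of the object relative to the rocking base
    x = x + w_rel * dt                # small-angle integration
    P = P + Q
    return x, P

def update(x, P, z_vision, R):
    """Correct with the vision-derived relative attitude (3 Euler angles)."""
    K = P @ np.linalg.inv(P + R)
    x = x + K @ (z_vision - x)
    P = (np.eye(3) - K) @ P
    return x, P

# Example with made-up numbers
x, P = np.zeros(3), np.eye(3) * 1e-2
Q, R = np.eye(3) * 1e-5, np.eye(3) * 1e-3
x, P = predict(x, P, np.array([0.01, 0.0, 0.02]), np.array([0.0, 0.0, 0.02]), 0.01, Q)
x, P = update(x, P, np.array([1e-4, 0.0, 0.0]), R)
```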

  16. National Ignition Facility system design requirements Laser System SDR002

    International Nuclear Information System (INIS)

    Larson, D.W.; Bowers, J.M.; Bliss, E.S.; Karpenko, V.P.; English, E.

    1996-01-01

    This System Design Requirements document establishes the performance, design, development, and test requirements for the NIF Laser System. The Laser System generates and delivers high-power optical pulses to the target chamber, and is composed of all optical pulse-creating and transport elements from Pulse Generation through Final Optics, as well as the special equipment that supports, energizes and controls them. The Laser System consists of the following WBS elements: 1.3 Laser System; 1.4 Beam Transport System; 1.6 Optical Components; 1.7 Laser Control; 1.8.7 Final Optics

  17. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly get the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
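
    The genetic-algorithm step can be illustrated with a toy sketch (assumptions only, not the paper's implementation): a GA searches the joint angles of a planar two-link arm so that its end-effector reaches a target point supplied by the vision system. Link lengths, target, population size and mutation scale are made-up values.

```python
import numpy as np

LINK1, LINK2 = 0.3, 0.25                  # assumed link lengths (m)
target = np.array([0.35, 0.20])           # assumed target point from stereo vision

def forward(q):
    """Forward kinematics of the planar 2-link arm."""
    x = LINK1 * np.cos(q[0]) + LINK2 * np.cos(q[0] + q[1])
    y = LINK1 * np.sin(q[0]) + LINK2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def fitness(q):
    return -np.linalg.norm(forward(q) - target)   # higher is better

rng = np.random.default_rng(0)
pop = rng.uniform(-np.pi, np.pi, size=(60, 2))    # initial population of joint angles
for _ in range(200):
    scores = np.array([fitness(q) for q in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # selection: keep the best 20
    a, b = rng.integers(0, 20, 60), rng.integers(0, 20, 60)
    alpha = rng.random((60, 1))
    children = alpha * parents[a] + (1 - alpha) * parents[b]   # arithmetic crossover
    pop = children + rng.normal(0.0, 0.05, (60, 2))            # mutation
best = pop[np.argmax([fitness(q) for q in pop])]
print("joint angles:", best, "end-effector:", forward(best))
```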

  18. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  19. A Vision-Based Sensor for Noncontact Structural Displacement Measurement

    Science.gov (United States)

    Feng, Dongming; Feng, Maria Q.; Ozer, Ekin; Fukuda, Yoshio

    2015-01-01

    Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement. PMID:26184197
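
    As a rough illustration of the upsampled cross-correlation idea (not the authors' software package), the sketch below uses scikit-image's phase_cross_correlation with an upsampling factor to obtain a subpixel shift between a reference template and the current frame region, and converts it to displacement with an assumed pixel-to-millimetre scale.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Illustrative sketch: mm_per_pixel is an assumed calibration scale factor.
def track_displacement(template, frame_roi, upsample_factor=20, mm_per_pixel=0.5):
    shift, error, _ = phase_cross_correlation(template, frame_roi,
                                              upsample_factor=upsample_factor)
    dy_px, dx_px = shift
    return dx_px * mm_per_pixel, dy_px * mm_per_pixel   # horizontal, vertical (mm)

# Example on synthetic data: the "current" frame is the reference shifted by 2 px
ref = np.random.rand(64, 64)
cur = np.roll(ref, shift=2, axis=1)
print(track_displacement(ref, cur))
```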

  20. Laser surveillance system (LASSY)

    International Nuclear Information System (INIS)

    Boeck, H.

    1991-09-01

    The Laser Surveillance System (LASSY) uses a beam of laser light which scans a plane above or below the water in a spent-fuel pond. The system can detect different objects and estimate their coordinates and distances as well. LASSY can operate in a stand-alone configuration or in combination with video surveillance, providing a trigger signal to a videorecorder. The information recorded on the LASSY computer's disk comprises the date, time, start and stop angles of the detected alarm, the size of the disturbance indicated as the number of deviated points, and some other information. The information given by the laser system cannot be fully substituted by TV camera pictures, since the scanning beam creates a horizontal surveillance plane. A long-term field test of the engineered prototype laser system has been carried out in Saluggia (Italy) and has shown its feasibility and reliability under the conditions of a real spent fuel storage pond. Verification of the alarm table on the LASSY computer against the recorded video pictures of the TV surveillance system confirmed that all alarm situations had been detected. 5 refs

  1. Aurora laser optical system

    International Nuclear Information System (INIS)

    Hanlon, J.A.; McLeod, J.

    1987-01-01

    Aurora is the Los Alamos short-pulse high-power krypton fluoride laser system. It is primarily an end-to-end technology demonstration prototype for large-scale UV laser systems of interest for short-wavelength inertial confinement fusion (ICF) investigations. The system is designed to employ optical angular multiplexing and serial amplification by electron-beam-driven KrF laser amplifiers to deliver to ICF targets a stack of pulses with a duration of 5 ns containing several kilojoules at a wavelength of 248 nm. A program of high-energy-density plasma physics investigations is now planned, and a sophisticated target chamber has been constructed. The authors describe the design of the optical system for Aurora and report its status. This optical system was designed and is being constructed in two phases. The first phase carries only through the amplifier train and does not include a target chamber or any demultiplexing; its installation should be complete, and some performance results should be available. The second phase provides demultiplexing and carries the laser light to the target. The complete design is reported

  2. Measuring the Contribution of Atmospheric Scatter to Laser Eye Dazzle

    Science.gov (United States)

    2015-09-01

    Laser eye dazzle is the temporary visual obscuration caused by visible-wavelength laser light. OCIS codes: (140.3360) Laser safety and eye protection; (290.5820) Scattering measurements; (330.4060) Vision modeling; (330.4595) Optical effects on vision. http://dx.doi.org/10.1364/AO.54.007567

  3. Semiautonomous teleoperation system with vision guidance

    Science.gov (United States)

    Yu, Wai; Pretlove, John R. G.

    1998-12-01

    This paper describes ongoing research work on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots always suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. Thus, this system has been developed for this purpose. It also serves as an experimental platform to test the idea of using a combination of human and computer intelligence in teleoperation and finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system and a graphical user interface which connects the operator with the remote robot. The system description is given in this paper as well as preliminary experimental results of the system evaluation.

  4. Recent laser experiments on the Aurora KrF/ICF laser system

    International Nuclear Information System (INIS)

    Turner, T.P.; Jones, J.E.; Czuchlewski, S.J.; Watt, R.G.; Thomas, S.J.; Kang, M.; Tallman, C.R.; Mack, J.M.; Figueira, J.F.

    1990-01-01

    The Aurora KrF/ICF Laser Facility at Los Alamos is operational at the kilojoule-level for both laser and target experiments. We report on recent laser experiments on the system and resulting system improvements. 3 refs., 4 figs

  5. Vision and laterality: does occlusion disclose a feedback processing advantage for the right hand system?

    Science.gov (United States)

    Buekers, M J; Helsen, W F

    2000-09-01

    The main purpose of this study was to examine whether manual asymmetries could be related to the superiority of the left hemisphere/right hand system in processing visual feedback. Subjects were tested when performing single (Experiment 1) and reciprocal (Experiment 2) aiming movements under different vision conditions (full vision, 20 ms on/180 ms off, 10/90, 40/160, 20/80, 60/120, 20/40). Although in both experiments right hand advantages were found, manual asymmetries did not interact with intermittent vision conditions. Similar patterns of results were found across vision conditions for both hands. These data do not support the visual feedback processing hypothesis of manual asymmetry. Motor performance is affected to the same extent for both hand systems when vision is degraded.

  6. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    International Nuclear Information System (INIS)

    Lee, Inho; Oh, Jaesung; Oh, Jun-Ho; Kim, Inhyeok

    2017-01-01

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify the objects in the environment, including those posed by the challenge issued by the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrain, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.
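
    The projection step described above (mapping laser points into the camera image using the camera intrinsics and a lens distortion model) can be sketched with OpenCV as follows; the intrinsic matrix, distortion coefficients and laser-to-camera extrinsics are assumed values, not the calibration of DRC-HUBO.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                    # assumed camera intrinsics
dist = np.array([-0.10, 0.02, 0.0, 0.0, 0.0])      # assumed radial/tangential distortion
rvec = np.zeros(3)                                 # assumed: sensor frames rotationally aligned
tvec = np.array([0.05, 0.0, 0.0])                  # assumed 5 cm laser-to-camera offset (m)

# Synthetic laser point cloud (x, y, z) in metres, in front of the camera
cloud = np.random.uniform([-1, -1, 2], [1, 1, 5], size=(1000, 3)).astype(np.float32)
pixels, _ = cv2.projectPoints(cloud, rvec, tvec, K, dist)
pixels = pixels.reshape(-1, 2)                     # one (u, v) image pixel per laser point
```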

  7. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Inho [Institute for Human and Machine Cognition (IHMC), Florida (United States); Oh, Jaesung; Oh, Jun-Ho [Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Inhyeok [NAVER Green Factory, Seongnam (Korea, Republic of)

    2017-06-15

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify the objects in the environment, including those posed by the challenge issued by the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrain, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.

  8. Some characteristics of isotopic separation laser systems

    International Nuclear Information System (INIS)

    Pochon, E.

    1988-01-01

    The principle of Laser Isotope Separation (LIS) is simple and based on either selective electronic photoexcitation and photoionization of atomic vapor, or selective vibrational photoexcitation and photodissociation of molecules in the gas phase. These processes, respectively called SILVA (AVLIS) and SILMO (MLIS) in France, both use specific laser systems with wavelengths spanning from the infrared to the ultraviolet. This article briefly describes some of the characteristics of a SILVA laser system. Following a three-step photoionization process, a SILVA laser system is based on dye lasers pumped by copper vapor lasers. The pulsed dye lasers provide the tunable laser light and are optically pumped by copper vapor lasers operating at high repetition rates. In order to meet plant laser system requirements, the main improvements under way relate to the copper vapor laser devices, whose power capability, efficiency, reliability and lifetime have to be increased. 1 fig

  9. Integration and coordination in a cognitive vision system

    OpenAIRE

    Wrede, Sebastian; Hanheide, Marc; Wachsmuth, Sven; Sagerer, Gerhard

    2006-01-01

    In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-d environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information tha...

  10. Multi-terawatt fusion laser systems

    International Nuclear Information System (INIS)

    Holzrichter, J.F.

    1993-01-01

    The evolution of laser fusion systems, from the description of the basic principles of the laser in 1959, through a physical demonstration of 1000 watts of peak optical power in 1961, to present systems that deliver 10^14 watts of peak optical power, is presented. Physical limits to large systems are reviewed: thermal limits, material stress limits, structural limits and stability, parasitic coupling, measurement precision and diagnostics. The various steps of the fusion laser-system development process are then discussed through a historical presentation. 3 figs., 8 refs

  11. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-01-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need to correct for scene-dependent deviations from the basic inverse distance-squared law governing the detection rates when evaluating system calibration algorithms. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
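
    A minimal sketch of the kind of distance-dependent calibration discussed above (illustrative assumptions only, not the authors' algorithm): the vision tracker supplies source-detector distances, the count rate is modelled with an inverse-square term plus background, and a scene gain is fitted to absorb deviations from the ideal law. Distances and count rates below are made-up values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative model: rate = scene_gain * strength / d^2 + background
def expected_rate(distance_m, source_strength, scene_gain, background=5.0):
    return scene_gain * source_strength / distance_m**2 + background

d = np.array([1.0, 1.5, 2.0, 3.0, 4.0])        # distances from the vision tracker (m)
r = np.array([105.0, 49.0, 30.0, 16.0, 11.0])  # measured count rates (counts/s)
popt, _ = curve_fit(expected_rate, d, r, p0=[100.0, 1.0])
print("fitted source strength and scene gain:", popt)
```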

  12. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip [University of Florida, Gainesville, FL 32611 (United States)

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need to correct for scene-dependent deviations from the basic inverse distance-squared law governing the detection rates when evaluating system calibration algorithms. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)

  13. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    International Nuclear Information System (INIS)

    Energetics, Inc.

    2000-01-01

    The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, which was held on July 27, 1999 in Washington, D.C. This workshop was held in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors were discussed that need to be considered in describing the successful future commercialization of cryogenic systems. Such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD and D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held to discuss the following specific focus topics: identifying critical factors that need to be included in a cryogenics vision for HTS electric power systems (from the HTS equipment end-user perspective); and identifying R and D needs and partnership roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document. The ideas and suggestions raised during

  14. Accurate Localization of Communicant Vehicles using GPS and Vision Systems

    Directory of Open Access Journals (Sweden)

    Georges CHALLITA

    2009-07-01

    The new generation of ADAS systems based on cooperation between vehicles can offer significant improvements in road safety. Inter-vehicle cooperation is made possible thanks to the revolution in wireless mobile ad hoc networks. In this paper, we develop a system that minimizes the imprecision of the GPS used for car tracking, based on the data given by the GPS, namely the coordinates and speed, in addition to vision data collected by the on-board equipment in the vehicle (camera and processor). Localization information can be exchanged between the vehicles through a wireless communication device. The system adopts the Monte Carlo method, or what we call a particle filter, for the treatment of the GPS data and vision data. An experimental study of this system is performed on our fleet of experimental communicating vehicles.
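
    A minimal particle-filter sketch of the fusion idea above (illustrative values, not the paper's code): particles representing the vehicle position are propagated with the vehicle velocity, weighted by the likelihood of both the GPS fix and the vision-based estimate (assumed Gaussian), and resampled.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal([0.0, 0.0], [5.0, 5.0], size=(N, 2))   # initial spread (m)

def step(particles, velocity, dt, gps, vision, gps_sigma=5.0, vis_sigma=1.0):
    # Predict: move particles with the vehicle velocity plus process noise
    particles = particles + velocity * dt + rng.normal(0.0, 0.5, particles.shape)
    # Update: weight by the likelihood of both measurements (assumed Gaussian)
    w = np.exp(-np.sum((particles - gps) ** 2, axis=1) / (2 * gps_sigma**2))
    w *= np.exp(-np.sum((particles - vision) ** 2, axis=1) / (2 * vis_sigma**2))
    w = (w + 1e-12) / (w + 1e-12).sum()
    # Resample according to the weights
    return particles[rng.choice(N, size=N, p=w)]

particles = step(particles, velocity=np.array([10.0, 0.0]), dt=0.1,
                 gps=np.array([1.2, -0.5]), vision=np.array([0.9, 0.1]))
print("position estimate:", particles.mean(axis=0))
```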

  15. Embedded active vision system based on an FPGA architecture

    OpenAIRE

    Chalimbaud , Pierre; Berry , François

    2006-01-01

    In computer vision, and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks,...

  16. Laser engineering of microbial systems

    Science.gov (United States)

    Yusupov, V. I.; Gorlenko, M. V.; Cheptsov, V. S.; Minaev, N. V.; Churbanova, E. S.; Zhigarkov, V. S.; Chutko, E. A.; Evlashin, S. A.; Chichkov, B. N.; Bagratashvili, V. N.

    2018-06-01

    A technology of laser engineering of microbial systems (LEMS) based on the method of laser-induced transfer of heterogeneous mixtures containing microorganisms (laser bioprinting) is described. This technology involves laser printing of soil microparticles by focusing near-infrared laser pulses on a specially prepared gel/soil mixture spread onto a gold-coated glass plate. The optimal range of laser energies was found with respect to the formation of stable jets and droplets and the minimization of the negative impact on living systems from giant accelerations, laser pulse irradiation, and Au nanoparticles. Microsamples of soil were printed on glucose-peptone-yeast agar plates to estimate the influence of the LEMS process on structural and morphological microbial diversity. The obtained results were compared with traditionally treated soil samples. It was shown that LEMS technology allows the biodiversity of printed organisms to be significantly increased and is effective for isolating rare or unculturable microorganisms.

  17. Vision and Displays for Military and Security Applications The Advanced Deployable Day/Night Simulation Project

    CERN Document Server

    Niall, Keith K

    2010-01-01

    Vision and Displays for Military and Security Applications presents recent advances in projection technologies and associated simulation technologies for military and security applications. Specifically, this book covers night vision simulation, semi-automated methods in photogrammetry, and the development and evaluation of high-resolution laser projection technologies for simulation. Topics covered include: advances in high-resolution projection, advances in image generation, geographic modeling, and LIDAR imaging, as well as human factors research for daylight simulation and for night vision devices. This title is ideal for optical engineers, simulator users and manufacturers, geomatics specialists, human factors researchers, and for engineers working with high-resolution display systems. It describes leading-edge methods for human factors research, and it describes the manufacture and evaluation of ultra-high resolution displays to provide unprecedented pixel density in visual simulation.

  18. Vision-based pedestrian protection systems for intelligent vehicles

    CERN Document Server

    Geronimo, David

    2013-01-01

    Pedestrian Protection Systems (PPSs) are on-board systems aimed at detecting and tracking people in the surroundings of a vehicle in order to avoid potentially dangerous situations. These systems, together with other Advanced Driver Assistance Systems (ADAS) such as lane departure warning or adaptive cruise control, are one of the most promising ways to improve traffic safety. By the use of computer vision, cameras working either in the visible or infra-red spectra have been demonstrated as a reliable sensor to perform this task. Nevertheless, the variability of human's appearance, not only in

  19. Embedded Platforms for Computer Vision-based Advanced Driver Assistance Systems: a Survey

    OpenAIRE

    Velez, Gorka; Otaegui, Oihana

    2015-01-01

    Computer Vision, either alone or combined with other technologies such as radar or Lidar, is one of the key technologies used in Advanced Driver Assistance Systems (ADAS). Its role in understanding and analysing the driving scene is of great importance, as can be noted from the number of ADAS applications that use this technology. However, porting a vision algorithm to an embedded automotive system is still very challenging, as there must be a trade-off between several design requisites. Further...

  20. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    Science.gov (United States)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open source OpenCV software to track an object's 3D position in space in real time. The desired resolution is +/-1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points which could allow neutrons to depolarize and possibly escape from the apparatus undetected.
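
    A minimal sketch of the triangulation at the heart of such a stereo-vision tracker (assumed calibration values, not the collaboration's code): given the pixel coordinates of the tracked Hall-probe marker in two calibrated cameras, OpenCV's triangulatePoints returns its 3D position.

```python
import numpy as np
import cv2

# Assumed intrinsics and a 0.5 m baseline along x; cameras are axis-aligned.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # camera 2 at x = 0.5 m

uv1 = np.array([[651.0], [470.0]])   # marker pixel in camera 1 (illustrative)
uv2 = np.array([[330.0], [470.0]])   # marker pixel in camera 2

X_h = cv2.triangulatePoints(P1, P2, uv1, uv2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print("marker position (m):", X)
```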

  1. The Theory of Random Laser Systems

    International Nuclear Information System (INIS)

    Xunya Jiang

    2002-01-01

    Studies of random laser systems are a new direction with promising potential applications and theoretical interest. The research is based on the theories of localization and laser physics. So far, the research shows that there are random lasing modes inside the systems, which are quite different from those of common laser systems. From the properties of the random lasing modes, one can understand the phenomena observed in the experiments, such as multi-peak and anisotropic spectra, lasing mode number saturation, mode competition and dynamic processes, etc. To summarize, this dissertation has contributed the following to the study of random laser systems: (1) by comparing the Lamb theory with the Letokhov theory, general formulas for the threshold length or gain of random laser systems were obtained; (2) the vital weakness of previous time-independent methods in random laser research was pointed out; (3) a new model was developed which combines the FDTD method and semi-classical laser theory, whose solutions provided an explanation of the experimental results of multi-peak and anisotropic emission spectra and predicted the saturation of the lasing mode number and the length of localized lasing modes; (4) theoretical (Lamb theory) and numerical (FDTD and transfer-matrix calculation) studies of the origin of localized lasing modes in random laser systems were performed; and (5) the use of random lasing modes was proposed as a new path to study wave localization in random systems, with a prediction of the lasing threshold discontinuity at the mobility edge

  2. The GEO 600 laser system

    CERN Document Server

    Zawischa, I; Danzmann, K; Fallnich, C; Heurs, M; Nagano, S; Quetschke, V; Welling, H; Willke, B

    2002-01-01

    Interferometric gravitational wave detectors require high optical power, single-frequency lasers with very good beam quality and high amplitude and frequency stability, as well as high long-term reliability, as the input light source. For GEO 600 a laser system with these properties is realized by a stable planar, longitudinally pumped 12 W Nd:YAG rod laser which is injection-locked to a monolithic 800 mW Nd:YAG non-planar ring oscillator. Frequency control signals from the mode cleaners are fed to the actuators of the non-planar ring oscillator, which determines the frequency stability of the system. The system power stabilization acts on the slave laser pump diodes, which have the largest influence on the output power. In order to gain more output power, a combined Nd:YAG-Nd:YVO4 system is scaled to more than 22 W.

  3. Different lasers and techniques for proliferative diabetic retinopathy.

    Science.gov (United States)

    Moutray, Tanya; Evans, Jennifer R; Lois, Noemi; Armstrong, David J; Peto, Tunde; Azuara-Blanco, Augusto

    2018-03-15

    Diabetic retinopathy (DR) is a chronic progressive disease of the retinal microvasculature associated with prolonged hyperglycaemia. Proliferative DR (PDR) is a sight-threatening complication of DR and is characterised by the development of abnormal new vessels in the retina, optic nerve head or anterior segment of the eye. Argon laser photocoagulation has been the gold standard for the treatment of PDR for many years, using regimens evaluated by the Early Treatment of Diabetic Retinopathy Study (ETDRS). Over the years, there have been modifications of the technique and introduction of new laser technologies. To assess the effects of different types of laser, other than argon laser, and different laser protocols, other than those established by the ETDRS, for the treatment of PDR. We compared different wavelengths; power and pulse duration; pattern, number and location of burns versus standard argon laser undertaken as specified by the ETDRS. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 5); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov and the ICTRP. The date of the search was 8 June 2017. We included randomised controlled trials (RCTs) of pan-retinal photocoagulation (PRP) using standard argon laser for treatment of PDR compared with any other laser modality. We excluded studies of lasers that are not in common use, such as the xenon arc, ruby or Krypton laser. We followed Cochrane guidelines and graded the certainty of evidence using the GRADE approach. We identified 11 studies from Europe (6), the USA (2), the Middle East (1) and Asia (2). Five studies compared different types of laser to argon: Nd:YAG (2 studies) or diode (3 studies). Other studies compared modifications to the standard argon laser PRP technique. The studies were poorly reported and we judged all to be at high risk of bias in at least one domain. The sample size

  4. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and fabricating building blocks that work like their biological role models. Neuromorphic systems, like the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real-time. It is argued that future artificial vision systems

  5. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  6. ISOLDE gets a new laser system

    CERN Multimedia

    Anaïs Schaeffer

    2011-01-01

    It's action stations at ISOLDE, the On-Line Isotope Mass Separator at CERN. The Laboratory is preparing to add a second laser ion source system to its arsenal. By alternating between two laser systems, the ISOLDE team will be able to switch from one type of beam to another in record time.   Bruce Marsh, from the EN-STI Group, with one of the lasers from ISOLDE's current system. The first laser source for producing radioactive ion beams (see box) was installed in the ISOLDE hall in the 1990s. This method, which was highly innovative for its time, has since been adopted by several laboratories all over the world. "This laser system allows us to control the ionisation wavelength with precision and thus to select specific atoms in order to produce very pure radioactive ion beams," explains Valentin Fedosseev of the EN Department. "These beams are then used for various experiments, in nuclear astrophysics and biology, for example. With two laser systems we will be able to do ...

  7. Multiparameter thermo-mechanical OCT-based characterization of laser-induced cornea reshaping

    Science.gov (United States)

    Zaitsev, Vladimir Yu.; Matveyev, Alexandr L.; Matveev, Lev A.; Gelikonov, Grigory V.; Vitkin, Alex; Omelchenko, Alexander I.; Baum, Olga I.; Shabanov, Dmitry V.; Sovetsky, Alexander A.; Sobol, Emil N.

    2017-02-01

    Phase-sensitive optical coherence tomography (OCT) is used for visualizing dynamic and cumulative strains and cornea-shape changes during laser-produced tissue heating. Such non-destructive (non-ablative) cornea reshaping can be used as the basis of emerging technologies for laser vision correction. In experiments with cartilaginous samples, polyacrylamide phantoms and excised rabbit eyes we demonstrate the ability of the developed OCT system to simultaneously characterize transient and cumulative strain distributions, surface displacements and scattering tissue properties, as well as the possibility of temperature estimation via thermal-expansion measurements. The proposed approach can be implemented in future real-time OCT systems for ensuring the safety of new methods of laser reshaping of the cornea.
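
    A common way to turn phase-sensitive OCT data into strain, in the spirit of the description above (a sketch under assumed parameters, not the authors' processing chain), is to convert the interframe phase difference into axial displacement and differentiate it along depth.

```python
import numpy as np

# phase_a, phase_b: 2-D phase maps (depth x lateral) from consecutive B-scans.
# Wavelength, tissue refractive index and axial pixel size are assumed values.
def axial_strain(phase_a, phase_b, wavelength_m=1.3e-6, n_tissue=1.38, dz_m=5e-6):
    dphi = np.angle(np.exp(1j * (phase_b - phase_a)))            # wrapped phase difference
    displacement = dphi * wavelength_m / (4 * np.pi * n_tissue)  # axial displacement (m)
    return np.gradient(displacement, dz_m, axis=0)               # strain = d(displacement)/dz

# Example: strain map for a uniform 0.1 rad interframe phase change
print(axial_strain(np.zeros((256, 512)), np.full((256, 512), 0.1))[0, 0])
```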

  8. Vision sensing techniques in aeronautics and astronautics

    Science.gov (United States)

    Hall, E. L.

    1988-01-01

    The close relationship between sensing and other tasks in orbital space, and the integral role of vision sensing in practical aerospace applications, are illustrated. Typical space mission-vision tasks encompass the docking of space vehicles, the detection of unexpected objects, the diagnosis of spacecraft damage, and the inspection of critical spacecraft components. Attention is presently given to image functions, the 'windowing' of a view, the number of cameras required for inspection tasks, the choice of incoherent or coherent (laser) illumination, three-dimensional-to-two-dimensional model-matching, edge- and region-segmentation techniques, and motion analysis for tracking.

  9. Nanomedical device and systems design challenges, possibilities, visions

    CERN Document Server

    2014-01-01

    Nanomedical Device and Systems Design: Challenges, Possibilities, Visions serves as a preliminary guide toward the inspiration of specific investigative pathways that may lead to meaningful discourse and significant advances in nanomedicine/nanotechnology. This volume considers the potential of future innovations that will involve nanomedical devices and systems. It endeavors to explore remarkable possibilities spanning medical diagnostics, therapeutics, and other advancements that may be enabled within this discipline. In particular, this book investigates just how nanomedical diagnostic and

  10. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among the various sensing channels, vision is the most important for making robots intelligent. If a robot is provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. The use of a special correlation chip and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications to robot behavior control are also introduced. (author)
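
    The correlation between local images that the system relies on can be illustrated with a software sketch (not the original special-purpose correlation hardware): each tracked cue is a small template that is re-located in every new frame by normalized cross-correlation within a search window around its previous position. The search window size is an assumed value.

```python
import cv2

# frame_gray and template are 8-bit grayscale images; prev_xy is the cue's
# previous top-left corner in the frame.
def track_cue(frame_gray, template, prev_xy, search=32):
    x, y = prev_xy
    h, w = template.shape
    y0, x0 = max(0, y - search), max(0, x - search)
    window = frame_gray[y0:y + h + search, x0:x + w + search]
    score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)          # location of the correlation peak
    return (x0 + max_loc[0], y0 + max_loc[1])        # new top-left corner of the cue

# Usage: template = frame0[y:y+h, x:x+w]; call track_cue on every new frame.
```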

  11. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    International Nuclear Information System (INIS)

    D’Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-01-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frame-rate camera have been carried out, in order to reduce the uncertainty of the evaluation of the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system allowed the information about the reference acceleration at the installation point to be fitted to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.
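
    The vision-based reference can be illustrated by a minimal sketch (illustrative only, not the authors' procedure): the tracked position of the sensor mounting point is differentiated twice to obtain the reference acceleration, here for a made-up sinusoidal motion.

```python
import numpy as np

# positions_mm: tracked positions of the mounting point (mm); fps: frame rate.
def reference_acceleration(positions_mm, fps):
    dt = 1.0 / fps
    velocity = np.gradient(positions_mm, dt)            # mm/s
    return np.gradient(velocity, dt) / 1000.0           # m/s^2

# Example: a 2 Hz, 5 mm amplitude oscillation sampled at 30 fps
t = np.arange(0.0, 2.0, 1.0 / 30.0)
pos = 5.0 * np.sin(2.0 * np.pi * 2.0 * t)
print(reference_acceleration(pos, 30)[:5])
```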

  12. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Energy Technology Data Exchange (ETDEWEB)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it; Natale, Emanuela, E-mail: emanuela.natale@univaq.it [University of L’Aquila, Department of Industrial and Information Engineering and Economics (DIIIE), via G. Gronchi, 18, 67100 L’Aquila (Italy)

    2016-06-28

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frame-rate camera have been carried out, in order to reduce the uncertainty of the evaluation of the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system allowed the information about the reference acceleration at the installation point to be fitted to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  13. Laser surveillance system for spent fuel

    International Nuclear Information System (INIS)

    Fiarman, S.; Zucker, M.S.; Bieber, A.M. Jr.

    1980-01-01

    A laser surveillance system installed at spent fuel storage pools (SFSP's) will provide the safeguard inspector with specific knowledge of spent fuel movement that cannot be obtained with current surveillance systems. The laser system will allow for the division of the pool's spent fuel inventory into two populations - those assemblies which have been moved and those which haven't - which is essential for maximizing the efficiency and effectiveness of the inspection effort. We have designed, constructed, and tested a full size laser system operating in air and have used an array of 6 zircaloy BWR tubes to simulate an assembly. The reflective signal from the zircaloy rods is a strong function of position of the assembly, but in all cases is easily discernable from the reference scan of the background with no assembly. A design for a SFSP laser surveillance system incorporating laser ranging is discussed. 10 figures

  14. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  15. Fluorescence-pumped photolytic gas laser system for a commercial laser fusion power plant

    International Nuclear Information System (INIS)

    Monsler, M.J.

    1977-01-01

    The first results are given for the conceptual design of a short-wavelength gas laser system suitable for use as a driver (high average power ignition source) for a commercial laser fusion power plant. A comparison of projected overall system efficiencies of photolytically excited oxygen, sulfur, selenium and iodine lasers is described, using a unique windowless laser cavity geometry which will allow scaling of single amplifier modules to 125 kJ per aperture for 1 ns pulses. On the basis of highest projected overall efficiency, a selenium laser is chosen for a conceptual power plant fusion laser system. This laser operates on the 489 nm transauroral transition of selenium, excited by photolytic dissociation of COSe by ultraviolet fluorescence radiation. Power balances and relative costs for optics, electrical power conditioning and flow conditioning of both the laser and fluorescer gas streams are discussed for a system with the following characteristics: 8 operating modules, 2 standby modules, 125 kJ per module, 1.4 pulses per second, 1.4 MW total average power. The technical issues of scaling visible and near-infrared photolytic gas laser systems to this size are discussed

  16. A vision fusion treatment system based on ATtiny26L

    Science.gov (United States)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of the eyeballs following a moving visual survey pole is put forward. In this system the initial position of the visual survey pole is about 35 centimeters from the patient's face, before it moves toward the middle position between the two eyes. The patient's eyeballs follow the movement of the visual survey pole. When they can no longer follow it, one or both eyeballs turn to a position other than that of the visual survey pole; this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal controls the visual survey pole so that it moves with continuously variable speed. The movement of the visual survey pole is set according to the law by which the eyeballs follow it.

  17. Nova laser alignment control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J.; Holloway, F.W.; McGuigan, D.L.; Shelton, R.T.

    1984-01-01

    Alignment of the Nova laser requires control of hundreds of optical components in the ten beam paths. Extensive application of computer technology makes daily alignment practical. The control system is designed in a manner which provides both centralized and local manual operator controls integrated with automatic closed-loop alignment. Menu-driven operator consoles using high resolution color graphics displays overlaid with transparent touch panels allow laser personnel to interact efficiently with the computer system. Automatic alignment is accomplished by using image analysis techniques to determine beam reference points from video images acquired along the laser chain. A major goal of the design is to contribute substantially to rapid experimental turnaround and consistent alignment results. This paper describes the computer-based control structure and the software methods developed for aligning this large laser system

  18. IDA's Energy Vision 2050

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Henrik; Hansen, Kenneth

    IDA’s Energy Vision 2050 provides a Smart Energy System strategy for a 100% renewable Denmark in 2050. The vision presented should not be regarded as the only option in 2050 but as one scenario out of several possibilities. With this vision the Danish Society of Engineers, IDA, presents its third contribution for an energy strategy for Denmark. The IDA’s Energy Plan 2030 was prepared in 2006 and IDA’s Climate Plan was prepared in 2009. IDA’s Energy Vision 2050 is developed for IDA by representatives from The Society of Engineers and by a group of researchers at Aalborg University. It is based on state-of-the-art knowledge about how low cost energy systems can be designed while also focusing on long-term resource efficiency. The Energy Vision 2050 has the ambition to focus on all parts of the energy system rather than single technologies, but to have an approach in which all sectors are integrated. While Denmark

  19. Laser experimental system as teaching aid for demonstrating basic phenomena of laser feedback

    International Nuclear Information System (INIS)

    Xu, Ling; Zhao, Shijie; Zhang, Shulian

    2015-01-01

    An experimental laser teaching system is developed to demonstrate laser feedback phenomena, which are detrimental to optical communication but beneficial for precision measurement. The system consists of an orthogonally polarized He-Ne laser, a feedback mirror which reflects the laser output light back into the laser cavity, and an optical attenuator which changes the intensity of the feedback light. By driving the feedback mirror with a piezoelectric ceramic, adjusting the attenuator and tilting the feedback mirror, the system can demonstrate many basic laser feedback phenomena, including weak, moderate and strong optical feedback, multiple feedback and polarization flipping. Demonstrations of these phenomena can give students a better understanding of the intensity and polarization of lasers. The system is well designed and assembled, simple to operate, and provides a valuable teaching aid at an undergraduate level. (paper)

  20. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  1. Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2013-05-01

    Wheel alignment, consisting of properly checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field, since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on machine vision have been widely studied in order to automatically detect the wheels' characteristic angles. In order to overcome the limitations of existing methodologies, due to measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine-vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to characteristic planes. Such planes, properly referenced to a global coordinate system, are used for determining the wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system proved fully compatible with the expected accuracy of wheel alignment systems.
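
    As a loose illustration of how characteristic angles follow from the reconstructed wheel plane (not the paper's algorithm), the sketch below derives toe and camber from a wheel-plane normal expressed in an assumed global coordinate system with x forward, y lateral and z vertical.

```python
import numpy as np

# normal: wheel-plane normal in the assumed global frame; for a perfectly
# aligned wheel the normal is purely lateral (along y).
def wheel_angles(normal):
    n = normal / np.linalg.norm(normal)
    toe = np.degrees(np.arctan2(n[0], n[1]))      # rotation of the wheel plane about z
    camber = np.degrees(np.arctan2(n[2], n[1]))   # tilt of the wheel plane about x
    return toe, camber

print(wheel_angles(np.array([0.01, 1.0, -0.02])))   # small toe, small negative camber
```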

  2. Machine vision system for remote inspection in hazardous environments

    International Nuclear Information System (INIS)

    Mukherjee, J.K.; Krishna, K.Y.V.; Wadnerkar, A.

    2011-01-01

    Visual inspection of radioactive components needs remote inspection systems for human safety and for the protection of equipment (CCD imagers) from radiation. Elaborate view transport optics is required to deliver images to safe areas while maintaining the fidelity of the image data. Automation of the system requires robots to operate such equipment. A robotized periscope has been developed to meet the challenge of remote safe viewing and vision-based inspection. (author)

  3. Recent advances in the development and transfer of machine vision technologies for space

    Science.gov (United States)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  4. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation and the basic molecular biology and genetics of colour vision. It takes a broad interdisciplinary approach combining basics in vision sciences with the most recent developments in the area, includes an extensive list of technical terms and explanations to encourage student understanding, and successfully brings together the most important areas of the subject into one volume.

  5. Present and future of vision systems technologies in commercial flight operations

    Science.gov (United States)

    Ward, Jim

    2016-05-01

    The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for the development of new sensors and vision displays to create the modern flight deck.

  6. Dynamically variable spot size laser system

    Science.gov (United States)

    Gradl, Paul R. (Inventor); Hurst, John F. (Inventor); Middleton, James R. (Inventor)

    2012-01-01

    A Dynamically Variable Spot Size (DVSS) laser system for bonding metal components includes an elongated housing containing a light entry aperture coupled to a laser beam transmission cable and a light exit aperture. A plurality of lenses contained within the housing focus a laser beam from the light entry aperture through the light exit aperture. The lenses may be dynamically adjusted to vary the spot size of the laser. A plurality of interoperable safety devices, including a manually depressible interlock switch, an internal proximity sensor, a remotely operated potentiometer, a remotely activated toggle and a power supply interlock, prevent activation of the laser and DVSS laser system if each safety device does not provide a closed circuit. The remotely operated potentiometer also provides continuous variability in laser energy output.

  7. Laser illumination and EO systems for covert surveillance from NIR to SWIR and beyond

    Science.gov (United States)

    Dvinelis, Edgaras; Žukauskas, Tomas; Kaušylas, Mindaugas; Vizbaras, Augustinas; Vizbaras, Kristijonas; Vizbaras, Dominykas

    2016-10-01

    One of the most important factors of success on the battlefield is the ability to remain undetected by the opposing forces while also being able to detect all possible threats. Illumination and pointing systems working in the NIR and SWIR bands are presented. Wavelengths up to 1100 nm can be registered by the newest generation of image intensifier tubes, CCD and EMCCD sensors. Image intensifier tubes of generation III or older are limited to wavelengths up to 900 nm [1]. Longer wavelengths of 1550 nm and 1625 nm are designed to be used with SWIR electro-optical systems, and they cannot be detected by any standard night vision system. Long range SWIR illuminators and pointers have beam divergences down to 1 mrad and optical powers up to 1.5 W. Due to lower atmospheric scattering, SWIR illuminators and pointers can be used at extremely long distances of up to tens of kilometres, even in heavy weather conditions. Longer wavelengths of 2100 nm and 2450 nm are also presented; this spectral band is of great interest for direct infrared countermeasure (DIRCM) applications. State-of-the-art SWIR and LWIR electro-optical systems are presented. Sensitive InGaAs sensors coupled with "fast" (low F/#) optical lenses can provide complete night vision, detection of all NIR and SWIR laser lines, and penetration through smoke, dust and fog. Finally, beyond-state-of-the-art uncooled microbolometer LWIR systems are presented, featuring ultra-high sensor sensitivity of 20 mK.

  8. Novel compact panomorph lens based vision system for monitoring around a vehicle

    Science.gov (United States)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain complete vision around the car, several sensor systems are normally necessary. To solve this issue, a customized imaging system based on a panomorph lens can provide the maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors into one. For example, a single panoramic sensor on the front of a vehicle could provide all necessary information for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.

  9. Laser and plasma diagnostics for the OMEGA Upgrade Laser System (invited) (abstract)

    International Nuclear Information System (INIS)

    Letzring, S.A.

    1995-01-01

    The upgraded OMEGA laser system will be capable of delivering up to 30 kJ of 351-nm laser light with various temporal pulse shapes onto a variety of targets for both ICF and basic plasma physics experiments. ICF experiments will cover a wide parameter space up to near-ignition conditions, and basic interaction and plasma physics experiments will cover previously unattainable parameter spaces. The laser system is the tool with which the experiments are performed; the diagnostics, both of the laser system and of the interaction between the laser and the target, form the heart of the experiment. A new suite of diagnostics is now being designed and constructed. Most of these are based on diagnostics fielded very successfully on the OMEGA laser system over the last ten years, but there are some new diagnostics, both for the laser and for the interaction experiments, which have had to be invented. Laser system diagnostics include high-energy, full-beam calorimetry for all of the 60 beams of the upgrade; a novel, multispectral energy-measuring system for assessing the tuning of the frequency-multiplying crystals; a beam-balance diagnostic that forms the heart of the energy-balance system; and a peak power diagnostic that forms the heart of the power-balance system. Target diagnostics will include the usual time-integrated x-ray imaging systems, both pinhole cameras and x-ray microscopes; x-ray spectrometers, both imaging and spatially integrating; plasma calorimeters, including x-ray calorimetry; and time-resolved x-ray diagnostics, both nonimaging and imaging in one and two dimensions. Neutron diagnostics will include several measurements of total yield, secondary and possibly tertiary yield, and neutron spectroscopy with several time-of-flight spectrometers. Other measurements will include "knock-on" particle measurements and neutron activation of shell materials as a diagnostic of compressed fuel and shell density.

  10. Portable electronic vision enhancement systems in comparison with optical magnifiers for near vision activities: an economic evaluation alongside a randomized crossover trial.

    Science.gov (United States)

    Bray, Nathan; Brand, Andrew; Taylor, John; Hoare, Zoe; Dickinson, Christine; Edwards, Rhiannon T

    2017-08-01

    To determine the incremental cost-effectiveness of portable electronic vision enhancement system (p-EVES) devices compared with optical low vision aids (LVAs), for improving near vision visual function, quality of life and well-being of people with a visual impairment. An AB/BA randomized crossover trial design was used. Eighty-two participants completed the study. Participants were current users of optical LVAs who had not tried a p-EVES device before and had a stable visual impairment. The trial intervention was the addition of a p-EVES device to the participant's existing optical LVA(s) for 2 months, and the control intervention was optical LVA use only, for 2 months. Cost-effectiveness and cost-utility analyses were conducted from a societal perspective. The mean cost of the p-EVES intervention was £448. Carer costs were £30 (4.46 hr) less for the p-EVES intervention compared with the LVA only control. The mean difference in total costs was £417. Bootstrapping gave an incremental cost-effectiveness ratio (ICER) of £736 (95% CI £481 to £1525) for a 7% improvement in near vision visual function. Cost per quality-adjusted life year (QALY) ranged from £56 991 (lower 95% CI = £19 801) to £66 490 (lower 95% CI = £23 055). Sensitivity analysis varying the commercial price of the p-EVES device reduced ICERs by up to 75%, with cost per QALYs falling below £30 000. Portable electronic vision enhancement system (p-EVES) devices are likely to be a cost-effective use of healthcare resources for improving near vision visual function, but this does not translate into cost-effective improvements in quality of life, capability or well-being. © 2016 The Authors. Acta Ophthalmologica published by John Wiley & Sons Ltd on behalf of Acta Ophthalmologica Scandinavica Foundation and European Association for Vision & Eye Research.
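    The comparison above turns on the incremental cost-effectiveness ratio (ICER), which is the difference in mean costs divided by the difference in mean effects between the intervention and control arms (the study additionally bootstraps this ratio to obtain confidence intervals). A minimal sketch of the arithmetic follows; the figures in the example are hypothetical placeholders, not the study's data.

```python
# Minimal ICER sketch. The input numbers below are hypothetical placeholders,
# not values taken from the p-EVES trial.

def icer(cost_intervention, cost_control, effect_intervention, effect_control):
    """Incremental cost per unit of incremental effect (e.g. per QALY)."""
    delta_cost = cost_intervention - cost_control
    delta_effect = effect_intervention - effect_control
    if delta_effect == 0:
        raise ValueError("Equal effects: the ICER is undefined.")
    return delta_cost / delta_effect

# Hypothetical example: cost per quality-adjusted life year (QALY) gained.
print(icer(cost_intervention=1500.0, cost_control=1083.0,
           effect_intervention=0.520, effect_control=0.505))
```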

  11. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is to use a vision-based row detection system. A new approach for row recognition is presented, based on a grey-scale Hough transform applied to intelligently merged images, resulting in a considerable improvement in image-processing speed.
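    As an illustration of the row-recognition idea, the sketch below implements a grey-scale Hough transform in which every pixel votes for candidate lines in proportion to its intensity, so no hard threshold is needed before line extraction. It is a minimal stand-alone NumPy example, not the authors' implementation, and it assumes a top-down greyscale image in which crop rows appear as bright, roughly straight stripes.

```python
# Minimal grey-scale Hough sketch (illustrative, not the paper's code).
import numpy as np

def greyscale_hough(img, n_theta=180, n_rho=400):
    h, w = img.shape
    thetas = np.deg2rad(np.arange(n_theta))        # candidate line angles
    diag = np.hypot(h, w)
    rhos = np.linspace(-diag, diag, n_rho)         # candidate line offsets
    acc = np.zeros((n_rho, n_theta), dtype=np.float64)
    ys, xs = np.nonzero(img > 0)                   # skip fully dark pixels
    weights = img[ys, xs].astype(np.float64)
    for j, (cos_t, sin_t) in enumerate(zip(np.cos(thetas), np.sin(thetas))):
        rho_vals = xs * cos_t + ys * sin_t
        idx = np.clip(((rho_vals + diag) / (2 * diag) * (n_rho - 1)).astype(int),
                      0, n_rho - 1)
        np.add.at(acc[:, j], idx, weights)         # intensity-weighted votes
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[i], thetas[j]                      # dominant row line (rho, theta)
```

    In practice the strongest few accumulator peaks, rather than the single maximum, would be taken as the candidate crop rows.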

  12. Shiva laser system performance

    International Nuclear Information System (INIS)

    Glaze, J.; Godwin, R.O.; Holzrichter, J.F.

    1978-01-01

    On November 18, 1977, after four years of experimentation, innovation, and construction, the Shiva High Energy Laser facility produced 10.2 kJ of focusable laser energy delivered in a 0.95 ns pulse. The Shiva laser, with its computer control system and delta amplifiers, demonstrated its versatility on May 18, 1978, when the first 20-beam target shot with delta amplifiers focused 26 TW on a target and produced a yield of 7.5 x 10^9 neutrons.

  13. Vision system for diagnostic task | Merad | Global Journal of Pure ...

    African Journals Online (AJOL)

    Due to degraded environmental conditions, direct measurements are not possible. ... Degraded conditions: vibrations, water and metal chip projections, ... Before tooling, the vision system has to answer: “is it the right piece at the right place?”

  14. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  15. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One typical requirement for a stereo vision system, in order to obtain better calibration results, is to guarantee that both cameras remain at the same vertical level. However, the cameras may become displaced due to severe robot operating conditions or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under such hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo cameras of the robot were displaced relative to each other, causing loss of surrounding environment information. We implemented and verified checkerboard- and circle-grid-based calibration methods. A comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.
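    For reference, the sketch below shows how the two target types compare in practice using OpenCV, which provides detectors for both checkerboards and circle grids. Board dimensions, square size and image size are illustrative assumptions, and the record's own pipeline (a displaced stereo pair on the «Servosila Engineer») is not reproduced here.

```python
# Minimal single-camera calibration sketch with either target type (OpenCV).
import cv2
import numpy as np

def detect_targets(gray, pattern_size=(7, 6), use_circle_grid=False):
    """Return detected 2D target points for one image, or None if not found."""
    if use_circle_grid:
        found, centers = cv2.findCirclesGrid(gray, pattern_size)
        return centers if found else None
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None
    # Refine checkerboard corners to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

def calibrate(image_points, pattern_size=(7, 6), square=0.03, image_size=(640, 480)):
    """Calibrate from per-image detections; returns reprojection error, K, distortion."""
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square
    object_points = [objp] * len(image_points)
    rms, K, dist, _, _ = cv2.calibrateCamera(object_points, image_points,
                                             image_size, None, None)
    return rms, K, dist
```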

  16. Laser projection positioning of spatial contour curves via a galvanometric scanner

    Science.gov (United States)

    Tu, Junchao; Zhang, Liyan

    2018-04-01

    The technology of laser projection positioning is widely applied in advanced manufacturing (e.g. composite plying, part location and installation). To make better use of it, a laser projection positioning (LPP) system is designed and implemented. Firstly, the LPP system is built from a laser galvanometric scanning (LGS) system and a binocular vision system, and the system model is constructed using a single-hidden-layer feed-forward neural network (SLFN). Secondly, the otherwise independent LGS and binocular systems are integrated through a data-driven calibration method based on the extreme learning machine (ELM) algorithm. Finally, a projection positioning method is proposed within the framework of the calibrated SLFN system model. A well-designed experiment is conducted to verify the viability and effectiveness of the proposed system. In addition, the accuracy of projection positioning is evaluated, showing that the LPP system achieves good localization.
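    The calibration step rests on the extreme learning machine: hidden-layer weights are drawn at random and only the output weights are solved for, in closed form, by least squares. The sketch below illustrates that training scheme under assumed variable names and sizes; it is not the paper's code, and the mapping from measured 3D coordinates to galvanometer commands is only indicated in the comments.

```python
# Minimal extreme-learning-machine (ELM) sketch; sizes and names are assumptions.
import numpy as np

def train_elm(X, Y, n_hidden=200, seed=0):
    """X: (n_samples, n_in) inputs; Y: (n_samples, n_out) targets."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (fixed)
    b = rng.normal(size=n_hidden)                  # random biases (fixed)
    H = np.tanh(X @ W + b)                         # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # output weights, closed form
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical use for this kind of calibration:
# X = measured 3D coordinates from the binocular system
# Y = corresponding galvanometer commands recorded during calibration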

  17. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

    Final report on the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM (Tire-Terrain CAMera). Authors: Prof. P.S. Els and C.M. Becker, University of Pretoria; contract W911NF-14-1-0590.

  18. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

    Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  19. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.
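    The estimator in both records above is an Extended Kalman Filter that alternates prediction with measurement updates from the AHRS, the GPS (during initialization) and the camera. The sketch below shows the generic EKF predict/update equations that such a fusion loop is built on; the state, models and noise terms are placeholders rather than the paper's actual formulation.

```python
# Generic EKF predict/update sketch (illustrative, not the paper's estimator).
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through motion model f (Jacobian F)."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with a measurement z from model h (Jacobian H)."""
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# In the system described above, separate updates of this form would be run for
# the AHRS attitude, the GPS position (initialization only), and the camera's
# landmark observations.
```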

  20. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off for integration with the active vision system.

  1. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597

  2. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power compared to the single FPGA chip with hardware modules and a soft-core processor.

  3. Vision Aided State Estimation for Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2007-01-01

    This paper presents the design and verification of a state estimator for a helicopter-based slung load system. The estimator is designed to augment the IMU-driven estimator found in many helicopter UAVs and uses vision-based updates only. The process model used for the estimator is a simple 4...

  4. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    Science.gov (United States)

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

  5. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds and automatically detect features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  6. Biofeedback for Better Vision

    Science.gov (United States)

    1990-01-01

    Biofeedtrac, Inc.'s Accommotrac Vision Trainer, invented by Dr. Joseph Trachtman, is based on vision research performed by Ames Research Center and a special optometer developed for the Ames program by Stanford Research Institute. In the United States, about 150 million people are myopes (nearsighted), who tend to overfocus when they look at distant objects causing blurry distant vision, or hyperopes (farsighted), whose vision blurs when they look at close objects because they tend to underfocus. The Accommotrac system is an optical/electronic system used by a doctor as an aid in teaching a patient how to contract and relax the ciliary body, the focusing muscle. The key is biofeedback, wherein the patient learns to control a bodily process or function he is not normally aware of. Trachtman claims a 90 percent success rate for correcting, improving or stopping focusing problems. The Vision Trainer has also proved effective in treating other eye problems such as eye oscillation, cross eyes, and lazy eye and in professional sports to improve athletes' peripheral vision and reaction time.

  7. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users.

    Science.gov (United States)

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2015-07-01

    Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population to evaluate the impact of new vision-restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional visual ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional visual tasks for observation of performance and a case narrative summary. Results were analysed to determine whether the interview questions and functional visual tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, 26 subjects were assessed with the FLORA. Seven different evaluators administered the assessment. All 14 interview questions were asked. All 35 tasks for functional vision were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options were used by the evaluators: impossible (33 per cent), difficult (23 per cent), moderate (24 per cent) and easy (19 per cent). Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with 'vision only' occurring 75 per cent of the time on average with the System ON, and 29 per cent with the System OFF. The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as an assessment tool for functional vision and well-being.

  8. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    Directory of Open Access Journals (Sweden)

    Chia-Sui Wang

    2015-01-01

    Full Text Available A virtual reality (VR) driver tracking verification system is created, and its application to stereo image tracking and positioning accuracy is researched in depth. The image-depth capability of the stereo vision system is used to reduce the error rates of image tracking and image measurement. In a VR scenario, the collection of driver behavioral data was tested. By means of VR, racing operation is simulated, and environmental variables (special weather such as rain and snow) and artificial variables (such as pedestrians suddenly crossing the road, vehicles appearing from blind spots, and roadblocks) are added as the basis for system implementation. In addition, the implementation applies human factors engineering to sudden conditions that can easily arise while driving. Experimental results show that the stereo vision system created in this research has an image-depth recognition error rate within 0.011% and an image tracking error rate smaller than 2.5%. The image recognition function of stereo vision is used to accomplish the data collection for driver tracking detection, and the environmental conditions of different simulated real scenarios can also be created through VR.

  9. High-performance OPCPA laser system

    International Nuclear Information System (INIS)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  10. High-performance OPCPA laser system

    Energy Technology Data Exchange (ETDEWEB)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  11. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  12. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178

  13. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Bjorholm; Jensen, Kirsten

    2015-01-01

    The color assessment ability of a multispectral vision system is investigated by a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material; heterogeneous with varying scattering and reflectance … are equally capable of measuring color. Moreover, the vision system provides a more color-rich assessment of fresh meat samples with a glossier surface than the colorimeter does. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods … accounting for other sources of variation, leading to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. (C) 2014 Elsevier Ltd. All rights reserved.

  14. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor affecting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current sensor position to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor with two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. It is concluded that the algorithm of the single-camera method needs to be improved for higher accuracy, whereas the accuracy of the dual-camera method is suitable for application.
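    The first approach hinges on recovering the rigid transform between the sensor frame and the global frame from a handful of control points seen by the orientation camera. A minimal sketch of that step (a least-squares rigid alignment via SVD, sometimes called the Kabsch solution) is given below; the point arrays are placeholders and the paper's full measurement chain is not reproduced.

```python
# Minimal rigid-alignment sketch: estimate R, t such that dst ≈ R @ src + t.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t between two 3D point sets."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical use:
# control_sensor = points measured by the orientation camera (sensor frame)
# control_global = the same control points in the global coordinate system
# R, t = rigid_transform(control_sensor, control_global)
# point_global = R @ point_sensor + t   # compensated measurement
```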

  15. ARGOS laser system mechanical design

    Science.gov (United States)

    Deysenroth, M.; Honsberg, M.; Gemperlein, H.; Ziegleder, J.; Raab, W.; Rabien, S.; Barl, L.; Gässler, W.; Borelli, J. L.

    2014-07-01

    ARGOS, a multi-star adaptive optics system, is designed for the wide-field imager and multi-object spectrograph LUCI on the LBT (Large Binocular Telescope). Based on Rayleigh scattering, the laser constellation images 3 artificial stars (at 532 nm) per each of the 2 eyes of the LBT, focused at a height of 12 km (Ground Layer Adaptive Optics). The stars are nominally positioned on a circle 2' in radius, but each star can be moved by up to 0.5' in any direction. The following main subsystems are necessary: 1. A laser system with its 3 lasers (Nd:YAG, ~18 W each) delivering the strong collimated light indispensable for laser guide stars. 2. The launch system, a 40 cm telescope projecting 3 beams per main mirror to the sky. 3. The wavefront sensor with a dichroic mirror. 4. The dichroic mirror unit to grab and interpret the data. 5. A calibration unit to adjust the system independently, also during daytime. 6. Racks and platforms for the WFS units. 7. Platforms and ladders for secure access. This paper demonstrates how the ARGOS laser system is configured and designed to support all the other subsystems.

  16. Optical system for UV-laser technological equipment

    Science.gov (United States)

    Fedosov, Yuri V.; Romanova, Galina E.; Afanasev, Maxim Ya.

    2017-09-01

    Recently there has been intensive development of intelligent industrial equipment that is highly automated and can be rapidly adjusted for specific parts. Such equipment includes robotic systems, automatic wrappers and markers, CNC machines and 3D printers. The equipment considered in this work is a system for selective curing of photopolymers using a UV laser; the use of UV radiation in such equipment leads to additional technical difficulties. In many cases a multi-mirror system is used to transport the radiation from the laser to the processed point; however, such systems are usually difficult to align. Additionally, such multi-mirror systems are usually used as part of equipment for laser cutting of metals with high-power IR lasers. For UV lasers, using many mirrors leads to significant radiation losses because of the many reflections. Therefore, in developing the optical system for UV-laser technological equipment, we need to solve two main problems: transferring the radiation to the working point with minimum losses, and including a system for controlling the position of the radiation spot. We introduce a system for working with UV lasers of 450 mW power and a wavelength of 0.45 μm, based on a fiber delivery system. In our modelling and design, we achieve spot sizes of about 300 μm, and the designed optical and mechanical systems (prototypes) were manufactured and assembled. In this paper, we present the layout of the technological unit, the results of the theoretical modelling of some parts of the system and some experimental results.

  17. Cost analysis of lasers for a laser isotope separation system. Final report

    International Nuclear Information System (INIS)

    Mail, R.A.; Markovich, F.J.; Carr, R.H.

    1977-01-01

    To be of practical significance, laser isotope separation (LIS) for separation of 235U from 238U must exhibit attributes which make it preferable to expansion of the present facilities. Clearly the most attractive such attribute is the prospect of significant cost reductions, which preliminary studies at LLL suggest will amount to a factor of three and perhaps as much as ten. From these preliminary studies, it appears that the lasers themselves account for a very substantial portion of the capital cost of an LIS system, and a significant portion of the equipment replacement costs. Since the laser costs are so pivotal to the system cost, and the system cost is so pivotal to the choice of separation techniques, it is clear that a more detailed investigation of laser costs is required. Results are presented of a study performed by General Research Corporation (GRC) to assess the cost of lasers in a production laser isotope separation (LIS) plant.

  18. Short pulse laser systems for biomedical applications

    CERN Document Server

    Mitra, Kunal

    2017-01-01

    This book presents practical information on the clinical applications of short pulse laser systems and the techniques for optimizing these applications in a manner that will be relevant to a broad audience, including engineering and medical students as well as researchers, clinicians, and technicians. Short pulse laser systems are useful for both subsurface tissue imaging and laser induced thermal therapy (LITT), which hold great promise in cancer diagnostics and treatment. Such laser systems may be used alone or in combination with optically active nanoparticles specifically administered to the tissues of interest for enhanced contrast in imaging and precise heating during LITT. Mathematical and computational models of short pulse laser-tissue interactions that consider the transient radiative transport equation coupled with a bio-heat equation considering the initial transients of laser heating were developed to analyze the laser-tissue interaction during imaging and therapy. Experiments were first performe...

  19. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    Science.gov (United States)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fusing the data from 3D vision sensors and radiological sensors, in hopes of creating a system capable of not only detecting the presence of a radiological threat but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
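    The inverse-square dependence mentioned above is what lets the vision data calibrate and then predict the radiological response. A minimal sketch of that idea follows, with a single efficiency constant fitted by least squares; all numbers are illustrative assumptions, not the paper's calibration data.

```python
# Minimal inverse-square calibration sketch: rate = k / d**2 + background.
import numpy as np

def fit_inverse_square(distances_m, count_rates_cps, background_cps=0.0):
    """Least-squares fit of the efficiency constant k."""
    x = 1.0 / np.asarray(distances_m) ** 2
    y = np.asarray(count_rates_cps) - background_cps
    return float(np.sum(x * y) / np.sum(x * x))

def predicted_rate(k, distance_m, background_cps=0.0):
    return k / distance_m ** 2 + background_cps

# Illustrative numbers only: distances from the vision system, measured rates.
k = fit_inverse_square([0.5, 1.0, 2.0], [412.0, 105.0, 28.0], background_cps=2.0)
print(predicted_rate(k, 1.5, background_cps=2.0))   # expected counts per second
```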

  20. Computer vision in roadway transportation systems: a survey

    Science.gov (United States)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  1. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a differentially driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field-programmable gate arrays (FPGAs) have been used to implement the PD controller for wall following and the PID controller that regulates the speed of the geared DC motor.
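    As an illustration of the wall-following behaviour, the sketch below shows a PD controller of the kind described, driven by the error between a desired wall distance and an ultrasonic range reading and steering a differential drive. Gains, set-points and speeds are illustrative assumptions; the original runs in FPGA logic rather than software.

```python
# Minimal PD wall-following sketch (illustrative values, not the paper's design).
class PDController:
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def update(self, error):
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

pd = PDController(kp=1.2, kd=0.4, dt=0.05)
desired_wall_distance = 0.40                        # metres from the corridor wall
range_reading = 0.46                                # ultrasonic measurement
steer = pd.update(desired_wall_distance - range_reading)
left_speed, right_speed = 0.3 - steer, 0.3 + steer  # differential drive command
```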

  2. SailSpy: a vision system for yacht sail shape measurement

    Science.gov (United States)

    Olsson, Olof J.; Power, P. Wayne; Bowman, Chris C.; Palmer, G. Terry; Clist, Roger S.

    1992-11-01

    SailSpy is a real-time vision system which we have developed for automatically measuring sail shapes and masthead rotation on racing yachts. Versions have been used by the New Zealand team in two America's Cup challenges in 1988 and 1992. SailSpy uses four miniature video cameras mounted at the top of the mast to provide views of the headsail and mainsail on either tack. The cameras are connected to the SailSpy computer below deck using lightweight cables mounted inside the mast. Images received from the cameras are automatically analyzed by the SailSpy computer, and sail shape and mast rotation parameters are calculated. The sail shape parameters are calculated by recognizing sail markers (ellipses) that have been attached to the sails, and the mast rotation parameters by recognizing deck markers painted on the deck. This paper describes the SailSpy system and some of the vision algorithms used.
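    The core image-processing step, recognizing the elliptical sail markers, can be sketched with standard tools: threshold the frame, extract contours and fit an ellipse to each sufficiently large closed contour. The OpenCV example below is an illustrative reconstruction under assumed thresholds, not the SailSpy code.

```python
# Minimal ellipse-marker detection sketch (illustrative thresholds).
import cv2

def find_sail_markers(frame_bgr, min_area=50):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    ellipses = []
    for c in contours:
        if len(c) >= 5 and cv2.contourArea(c) >= min_area:  # fitEllipse needs 5+ points
            ellipses.append(cv2.fitEllipse(c))              # (centre, axes, angle)
    return ellipses
```

    The fitted centres and orientations of the markers would then feed the sail-shape parameter calculation described in the abstract.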

  3. Laser Safety and Hazard Analysis for the Trailer (B70) Based AURA Laser System

    International Nuclear Information System (INIS)

    AUGUSTONI, ARNOLD L.

    2003-01-01

    A laser safety and hazard analysis was performed for the AURA laser system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, "Safe Use of Lasers", and the 2000 version of the ANSI Standard Z136.6, "Safe Use of Lasers Outdoors". The trailer-based AURA laser system is a mobile platform used to perform laser interaction experiments and tests at various national test sites. The trailer (B70) based AURA laser system is generally operated on the United States Air Force Starfire Optical Range (SOR) at Kirtland Air Force Base (KAFB), New Mexico. The laser is used to perform laser interaction testing inside the laser trailer as well as outside the trailer at target sites located at various distances from the exit telescope. In order to protect personnel who work inside the Nominal Hazard Zone (NHZ) from hazardous laser emission exposures, it was necessary to determine the Maximum Permissible Exposure (MPE) for each laser wavelength (or wavelength band), to calculate the appropriate minimum optical density (OD_min) of the laser safety eyewear used by authorized personnel, and to calculate the Nominal Ocular Hazard Distance (NOHD) to protect unauthorized personnel who may violate the boundaries of the control area and enter the laser's NHZ.
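    The two protective quantities named above follow standard ANSI Z136.1 relations: the minimum eyewear optical density is the base-10 logarithm of the worst-case exposure over the MPE, and for a continuous-wave circular beam the NOHD is the range at which beam spreading brings the irradiance down to the MPE. The sketch below applies those relations with purely illustrative numbers; it is not the AURA hazard analysis itself, which follows the standard's full procedure.

```python
# Minimal OD_min / NOHD sketch with illustrative (not AURA) values.
import math

def od_min(worst_case_exposure, mpe):
    """Minimum optical density of protective eyewear."""
    return math.log10(worst_case_exposure / mpe)

def nohd(power_w, mpe_w_cm2, beam_diameter_cm, divergence_rad):
    """Nominal ocular hazard distance (in cm) for a continuous-wave beam."""
    return (math.sqrt(4.0 * power_w / (math.pi * mpe_w_cm2)) - beam_diameter_cm) / divergence_rad

print(od_min(worst_case_exposure=5.0, mpe=1.0e-3))     # OD of at least ~3.7
print(nohd(power_w=10.0, mpe_w_cm2=2.5e-3,
           beam_diameter_cm=1.0, divergence_rad=1.0e-3))  # distance in cm
```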

  4. Laser Pyro System Standardization and Man Rating

    Science.gov (United States)

    Brown, Christopher W.

    2004-01-01

    This viewgraph presentation reviews an X-38 laser pyro system standardization effort for a new man-rated program. Plans for approving this laser initiation system and preliminary ideas for the system are also provided.

  5. Radiological protection against lasers

    Energy Technology Data Exchange (ETDEWEB)

    Ballereau, P

    1974-04-01

    A brief description of the biological effects of laser beams is followed by a review of the factors involved in eye and skin damage (factors linked with the nature of lasers and those linked with the organ affected) and a discussion of the problems involved in the determination of threshold exposure levels. Preventive measures are recommended, according to the type of laser (high-energy pulse laser, continuous laser, gas laser). No legislation on the subject exists in France or in Europe. Types of lasers marketed, threshold exposure levels for eye and skin, variations of admissible exposure levels according to wavelength, etc. are presented in tabular form. Nomogram for determination of safe distance for direct vision of a laser is included.

  6. Early Cognitive Vision as a Frontend for Cognitive Systems

    DEFF Research Database (Denmark)

    Krüger, Norbert; Pugeault, Nicolas; Baseski, Emre

    We discuss the need for an elaborated intermediate stage bridging early vision and cognitive vision, which we call 'Early Cognitive Vision' (ECV). This stage provides semantically rich, disambiguated and largely task-independent scene representations which can be used in many contexts. In addition...

  7. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The paper discusses the roles of visioning processes and visions in foresight activities and in societal discourses and changes parallel to or following foresight activities. The overall topic can be characterised as the dynamics and mechanisms that make visions and visioning processes work … or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning, and what has made them work or not work, is analysed. The empirical part is based on analyses of the roles of visions and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  8. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is missed because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, the mathematical model and the parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  9. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Full Text Available Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  10. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  11. Comparing wavefront-optimized, wavefront-guided and topography-guided laser vision correction: clinical outcomes using an objective decision tree.

    Science.gov (United States)

    Stonecipher, Karl; Parrish, Joseph; Stonecipher, Megan

    2018-05-18

    This review is intended to update and educate the reader on the currently available options for laser vision correction, more specifically, laser-assisted in-situ keratomileusis (LASIK). In addition, related clinical outcomes data from over 1000 cases performed over a 1-year period are presented to highlight differences between the various treatment profiles currently available, including the rapidity of visual recovery. The cases in question were performed on the basis of a decision tree that segregates patients according to anatomical, topographic and aberrometry findings; the decision tree was formulated based on the data available in some of the reviewed articles. Numerous recent studies reported in the literature provide data related to the risks and benefits of LASIK; alternatives to a laser refractive procedure are also discussed. The results from these studies have been used to prepare a decision tree to assist the surgeon in choosing the best option for the patient based on the data from several standard preoperative diagnostic tests. The data presented here should aid surgeons in understanding the effects of currently available LASIK treatment profiles. Surgeons should also be able to appreciate how the findings were used to create a decision tree to help choose the most appropriate treatment profile for patients. Finally, the retrospective evaluation of clinical outcomes based on the decision tree should provide surgeons with a realistic expectation for their own outcomes should they adopt such a decision tree in their own practice.

  12. Modeling foveal vision

    NARCIS (Netherlands)

    Florack, L.M.J.; Sgallari, F.; Murli, A.; Paragios, N.

    2007-01-01

    A geometric model is proposed for an artificial foveal vision system, and its plausibility in the context of biological vision is explored. The model is based on an isotropic, scale-invariant two-form that describes the spatial layout of receptive fields in the visual sensorium (in the biological

  13. Functional programming for computer vision

    Science.gov (United States)

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first-class data objects. Compared with imperative programs, functional programs can be parallelized better, and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for efficiently representing data and objects common in computer vision. In particular, we will address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high-performance vision systems, and that a functional approach greatly simplifies the implementation and integration of vision systems. Examples in C++ and SML are given.
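    The "representing images as functions" idea mentioned in the abstract can be illustrated compactly: an image is a pure function from coordinates to values, and operators return new functions instead of mutating pixel buffers. The toy example below (in Python rather than the paper's C++/SML) is purely illustrative.

```python
# Minimal "images as functions" sketch: operators compose pure functions.
def shift(image_fn, dx, dy):
    return lambda x, y: image_fn(x - dx, y - dy)

def threshold(image_fn, t):
    return lambda x, y: 1.0 if image_fn(x, y) > t else 0.0

def add(f, g):
    return lambda x, y: f(x, y) + g(x, y)

# A synthetic image: a bright disc of radius 10 centred at the origin.
disc = lambda x, y: 1.0 if x * x + y * y < 100 else 0.0
pipeline = threshold(add(disc, shift(disc, 15, 0)), 0.5)
print(pipeline(15, 0))   # 1.0: the point lies inside the shifted disc
```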

  14. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Thesis: Utilizing Robot Operating System (ROS) in Robot Vision and Control, by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples.

  15. Interoperability Strategic Vision

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Knight, Mark R.; Melton, Ronald B.; Narang, David; Martin, Maurice; Nordman, Bruce; Khandekar, Aditya; Hardy, Keith S.

    2018-02-28

    The Interoperability Strategic Vision whitepaper aims to promote a common understanding of the meaning and characteristics of interoperability and to provide a strategy to advance the state of interoperability as applied to integration challenges facing grid modernization. This includes addressing the quality of integrating devices and systems and the discipline to improve the process of successfully integrating these components as business models and information technology improve over time. The strategic vision for interoperability described in this document applies throughout the electric energy generation, delivery, and end-use supply chain. Its scope includes interactive technologies and business processes from bulk energy levels to lower voltage level equipment and the millions of appliances that are becoming equipped with processing power and communication interfaces. A transformational aspect of a vision for interoperability in the future electric system is the coordinated operation of intelligent devices and systems at the edges of grid infrastructure. This challenge offers an example for addressing interoperability concerns throughout the electric system.

  16. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes the Robot Vision and Operation System for the Nuclear Advanced Robot. The robot vision consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along the planned path. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, the system can be easily operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by only one person. (author)

  17. Embedded Vehicle Speed Estimation System Using an Asynchronous Temporal Contrast Vision Sensor

    Directory of Open Access Journals (Sweden)

    D. Bauer

    2007-01-01

    Full Text Available This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, wide dynamic range of 120 dB of illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The error of the speed estimation of all algorithms is near zero mean and has a standard deviation better than 3% for both traffic flow directions. The results and the accuracy limitations as well as the combined use of the algorithms in the system are discussed.
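
    As a rough illustration of how asynchronous event timing can be turned into a speed estimate, the sketch below times a vehicle between two virtual detection lines a known distance apart. This is a minimal, assumption-laden sketch and not one of the three algorithms evaluated in the article; the event fields, pixel rows and 4 m line spacing are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float   # timestamp in seconds (sensor latency is below 100 us)
    x: int     # pixel column of the contrast event
    y: int     # pixel row of the contrast event

LINE_A_ROW, LINE_B_ROW = 100, 160   # two virtual detection lines (pixel rows), assumed
LINE_GAP_M = 4.0                    # assumed real-world distance between the lines, metres

def first_crossing(events, row, tol=2):
    """Timestamp of the first event near a detection line (events sorted by time)."""
    for e in events:
        if abs(e.y - row) <= tol:
            return e.t
    return None

def estimate_speed_kmh(events):
    t_a = first_crossing(events, LINE_A_ROW)
    t_b = first_crossing(events, LINE_B_ROW)
    if t_a is None or t_b is None or t_a == t_b:
        return None
    return LINE_GAP_M / abs(t_b - t_a) * 3.6   # m/s converted to km/h

# Two events 0.18 s apart over a 4 m gap correspond to about 80 km/h.
demo = [Event(1.000, 320, 101), Event(1.180, 318, 159)]
print(estimate_speed_kmh(demo))
```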

  18. 21 CFR 884.6200 - Assisted reproduction laser system.

    Science.gov (United States)

    2010-04-01

    Title 21 (Food and Drugs), Medical Devices, Obstetrical and Gynecological Devices, Assisted Reproduction Devices, § 884.6200 Assisted reproduction laser system. (a) Identification. The assisted reproduction laser system is a device...

  19. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

    This Special Edited Volume is a unique approach towards a computational solution for the emerging field of study called Vision Science. Optics, ophthalmology, and optical science have come a long way in optimizing the configurations of optical systems, surveillance cameras and other nano-optical devices with the help of nanoscience and technology. Still, these systems fall short of the computational capability needed to reach the pinnacle of the human vision system. In this edited volume, much attention has been given to addressing the coupling issues between computational science and vision studies. It is a comprehensive collection of research works addressing various related areas of vision science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision, etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...

  20. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  1. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2013-01-01

    Full Text Available With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation towards the performance of human activity recognition.

  2. Repetitive output laser system and method using target reflectivity

    International Nuclear Information System (INIS)

    Johnson, R.R.

    1978-01-01

    An improved laser system and method for implosion of a thermonuclear fuel pellet is described in which that portion of a laser pulse reflected by the target pellet is utilized in the laser system to initiate a succeeding target implosion, and in which the energy stored in the laser system to amplify the initial laser pulse, but not completely absorbed thereby, is used to amplify succeeding laser pulses initiated by target reflection

  3. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Full Text Available Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has completely shifted from manual inspection to machine-assisted vision inspection methodology. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine vision based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, having different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found by using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed at the optimum location.
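
    The pouring-cup centre described above is essentially a circle-detection problem, so a plausible (though purely illustrative) way to sketch that step with standard tools is OpenCV's Hough circle transform. The file name, radius limits and preprocessing below are assumptions, not the authors' actual pipeline.

```python
import cv2
import numpy as np

def find_pouring_cup(image_path, min_r=20, max_r=120):
    """Return the pixel centre and radius of the most prominent circle (the cup rim)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    img = cv2.medianBlur(img, 5)   # suppress sand-texture noise before the transform
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                               param1=100, param2=40,
                               minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)   # strongest candidate circle
    return (x, y), r

# The pixel centre would then be converted to mould-table coordinates and sent to the
# microcontroller that drives the alignment mechanism (that part is omitted here).
```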

  4. Local annealing of shape memory alloys using laser scanning and computer vision

    Science.gov (United States)

    Hafez, Moustapha; Bellouard, Yves; Sidler, Thomas C.; Clavel, Reymond; Salathe, Rene-Paul

    2000-11-01

    A complete set-up for local annealing of Shape Memory Alloys (SMA) is proposed. Such alloys, when plastically deformed at a given low temperature, have the ability to recover a previously memorized shape simply by heating up to a higher temperature. They find more and more applications in the fields of robotics and micro engineering. There is a tremendous advantage in using local annealing because this process can produce monolithic parts which have different mechanical behavior at different locations of the same body. Using this approach, it is possible to integrate all the functionality of a device within one piece of material. The set-up is based on a 2 W laser diode emitting at 805 nm and a scanner head. The laser beam is coupled into an optical fiber of 60 μm in diameter. The fiber output is focused on the SMA work-piece using a relay lens system with a 1:1 magnification, resulting in a spot diameter of 60 μm. An imaging system is used to control the position of the laser spot on the sample. In order to displace the spot on the surface, a tip/tilt laser scanner is used. The scanner is positioned in a pre-objective configuration and allows a scan field size of more than 10 x 10 mm2. A graphical user interface of the scan field allows the user to quickly set up marks and alter their placement and power density. This is achieved by computer control of the X and Y positions of the scanner as well as the laser diode power. An SMA micro-gripper with a surface area of less than 1 mm2 and an opening of the jaws of 200 μm has been realized using this set-up. It is electrically actuated, and a controlled force of 16 mN can be applied to hold and release small objects such as graded index micro-lenses at a cycle time of typically 1 s.

  5. Laser surveillance system for spent fuel

    International Nuclear Information System (INIS)

    Fiarman, S.; Zucker, M.S.; Bieber, A.M. Jr.

    1980-01-01

    A laser surveillance system installed at spent fuel storage pools will provide the safeguard inspector with specific knowledge of spent fuel movement that cannot be obtained with current surveillance systems. The laser system will allow for the division of the pool's spent fuel inventory into two populations - those assemblies which have been moved and those which haven't - which is essential for maximizing the efficiency and effectiveness of the inspection effort. We have designed, constructed, and tested a laser system and have used it with a simulated BWR assembly. The reflected signal from the zircaloy rods depends on the position of the assembly, but in all cases is easily discernable from the reference scan of background with no assembly

  6. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

    Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  7. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose, a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensure adequate and efficient application for automation of industrial processes in quality control in the food industry sector.
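
    The error figures quoted above suggest a simple workflow: compute the mean L*a*b* of a flat sample region from the camera image and compare it against colorimeter readings. The sketch below is a hedged illustration of that comparison (the original work uses Matlab); the scikit-image conversion, the sample patch and the reference values are illustrative assumptions.

```python
import numpy as np
from skimage.color import rgb2lab

def mean_lab(rgb_patch):
    """Mean L*, a*, b* of a float RGB patch with values in [0, 1], shape (H, W, 3)."""
    return rgb2lab(rgb_patch).reshape(-1, 3).mean(axis=0)

def percent_error(measured, reference):
    return 100.0 * abs(measured - reference) / abs(reference)

# A flat-coloured sample region as seen by the camera (illustrative values).
patch = np.ones((50, 50, 3)) * np.array([0.8, 0.4, 0.3])
L, a, b = mean_lab(patch)

# Hypothetical colorimeter readings for the same sample.
ref_L, ref_a, ref_b = 62.0, 37.0, 25.0
print(percent_error(L, ref_L), percent_error(a, ref_a), percent_error(b, ref_b))
```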

  8. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-01-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) for real-time color measurement on flat-surface food. For this purpose, a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensure adequate and efficient application for automation of industrial processes in quality control in the food industry sector.

  9. Development of broadband free electron laser technology

    International Nuclear Information System (INIS)

    Lee, B. C.; Jeong, Y. W.; Joe, S. O.; Park, S. H.; Ryu, J. K.; Kazakevich, G.; Cha, H. J.; Sohn, S. C.; Han, S. J.

    2003-02-01

    Laser cladding technology was developed to mitigate the fretting wear damage that occurred at fuel spacers in the Hanaro reactor. The detailed experimental procedures are as follows: 1) analyses of fretting wear damage and the fabrication process of fuel spacers; 2) development and analysis of spherical Al 6061 T-6 alloy powders for the laser cladding; 3) analysis of parameter effects on the laser cladding process for clad beads, and optimization of the laser cladding process; 4) analysis of the changes of cladding layers due to overlapping factor change; 5) microstructural observation and phase analysis; 6) characterization of materials properties (hardness and wear tests); 7) development of a vision system and revision of its related software; 8) manufacture of prototype fuel spacers. As a result, it was confirmed that the laser cladding technology could considerably increase the wear resistance of the Al 6061 alloy, which is the raw material of the fuel spacers.

  10. System for synthetic vision and augmented reality in future flight decks

    Science.gov (United States)

    Behringer, Reinhold; Tam, Clement K.; McGee, Joshua H.; Sundareswaran, Venkataraman; Vassiliou, Marius S.

    2000-06-01

    Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays which provide the vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed during bad weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions which would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUD). The advantage of AR systems vs. purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view, in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used for obtaining correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual cues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc sec digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (at ground level), the system has been implemented on a wearable computer.

  11. The research of binocular vision ranging system based on LabVIEW

    Science.gov (United States)

    Li, Shikuan; Yang, Xu

    2017-10-01

    Based on the study of the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized with LabVIEW software. Camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective and can be used in corresponding industrial settings.
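
    The geometric core of any binocular ranging system of this kind is the parallax relation Z = f·B/d, with focal length f in pixels, baseline B in metres and disparity d in pixels. The minimal sketch below only illustrates that relation; the numbers are made up and it is not the LabVIEW implementation described above.

```python
def range_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 35 px disparity -> 2.4 m range.
print(range_from_disparity(700.0, 0.12, 35.0))
```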

  12. Close coupling of pre- and post-processing vision stations using inexact algorithms

    Science.gov (United States)

    Shih, Chi-Hsien V.; Sherkat, Nasser; Thomas, Peter D.

    1996-02-01

    Work has been reported on using lasers to cut deformable materials. Although the use of a laser reduces material deformation, distortion due to mechanical feed misalignment persists. Changes in the lace pattern are also caused by the release of tension in the lace structure as it is cut. To tackle the problem of distortion due to material flexibility, the 2VMethod together with the Piecewise Error Compensation Algorithm, incorporating inexact algorithms (fuzzy logic, neural networks and the neural fuzzy technique), has been developed. A spring-mounted pen is used to emulate the distortion of the lace pattern caused by tactile cutting and feed misalignment. Using pre- and post-processing vision systems, it is possible to monitor the scalloping process and generate on-line information for the artificial intelligence engines. This overcomes the problems of lace distortion due to the trimming process. Applying the algorithms developed, the system can produce excellent results, much better than a human operator.

  13. Keratoprotection treatment after excimerlaser vision correction

    Directory of Open Access Journals (Sweden)

    S. A. Korotkikh

    2015-01-01

    Full Text Available An observational study of patients after excimer laser vision correction by the LASEK method. Purpose: to estimate the efficacy of HILOZAR-COMOD® solution in patients after excimer laser vision correction. Patients and methods: We examined 80 eyes (40 patients) after excimer laser correction by the LASIK method. All patients were divided into 2 groups. The patients of the first group were treated with a standard drug scheme that included a deproteinized dialysate from the blood of healthy dairy calves (Solkoseryl® eye gel). HILOZAR-COMOD® was prescribed as a corneal protector in the second group of patients. Results: In the first group, complete corneal epithelialization by biomicroscopy was found in 70% of eyes 48 hours after excimer laser vision correction. Minimal non-epithelialized areas were diagnosed in 30% (12 eyes). In the second group, complete corneal epithelialization was found in 82.5% of eyes (33 eyes) over the same term after excimer laser correction. Corneal epithelium defects in the optical area were diagnosed in 17.5% of eyes. The difference in the number of patients with corneal epithelium defects between the first and second groups was 12.5%. 97.5% of patients (39 eyes) of the second group (HILOZAR-COMOD®) had complete corneal epithelialization 72 hours after excimer laser correction. Over the same term, non-epithelialized areas were found in 3 eyes (7.5%) of patients of the first group. This was 5% higher in the group where dexpanthenol and hyaluronic acid were used than in the first group (complete corneal epithelialization in the first group was found in 37 eyes). Conclusions: The combined medicine containing dexpanthenol and hyaluronic acid decreases the intensity of dry eye symptoms, stimulates quick and full corneal healing and decreases the risk of postoperative complications.

  14. Improvements of the ruby laser oscillator system for laser scattering

    International Nuclear Information System (INIS)

    Yamauchi, Toshihiko; Kumagai, Katsuaki; Kawakami, Tomohide; Matoba, Tohru; Funahashi, Akimasa

    1978-10-01

    A ruby laser oscillator system is used to measure electron temperatures of the Tokamak plasmas (JFT-2 and JFT-2a). Improvements have been made to the laser oscillator to obtain correct values. Described are the improvements and the damage to a ruby rod and a KD*P crystal for Q-switching caused by the laser beam. The improvements include replacing the linear Xe lamp with a helical Xe lamp and modifying the electrical circuit for Q-switching. The damage to optical components by the laser beam should be clarified from the damage data; the cause has not yet been found. (author)

  15. Understanding and applying machine vision

    CERN Document Server

    Zeuch, Nello

    2000-01-01

    A discussion of applications of machine vision technology in the semiconductor, electronic, automotive, wood, food, pharmaceutical, printing, and container industries. It describes systems that enable projects to move forward swiftly and efficiently, and focuses on the nuances of the engineering and system integration of machine vision technology.

  16. Problems in the development of autonomous mobile laser systems based on a cw chemical DF laser

    International Nuclear Information System (INIS)

    Aleksandrov, B P; Bashkin, A S; Beznozdrev, V N; Parfen'ev, M V; Pirogov, N A; Semenov, S N

    2003-01-01

    The problems involved in designing autonomous mobile laser systems based on high-power cw chemical DF lasers, whose mass and size parameters would make it possible to install them on various vehicles, are discussed. The need for mobility of such lasers requires special attention to be paid to ways and means of reducing the mass and size of the main laser systems. The optimisation of the parameters of such lasers is studied for various methods of scaling their systems. A complex approach to the analysis of the optical scheme of the laser system is developed. (Special issue devoted to the 80th anniversary of Academician N.G. Basov's birth)

  17. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    Directory of Open Access Journals (Sweden)

    Asraf Ali

    2012-08-01

    Full Text Available Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Resolving this question has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision and non-vision sensor technologies, as well as combinations of these. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words “and”, “or”, and “not” were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review paper will assist the progress of the development of rehabilitation systems for human gait disorders.

  18. A remote assessment system with a vision robot and wearable sensors.

    Science.gov (United States)

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes an ongoing research effort on a remote rehabilitation assessment system that has a 6-degree-of-freedom, two-camera vision robot to capture visual information, and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically carry out an assessment program via the Internet. The preliminary results show that the smart device, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  19. System for combining laser beams of diverse frequencies

    International Nuclear Information System (INIS)

    1980-01-01

    A system is described for combining laser beams of different frequencies into a number of beams each comprising laser radiation having components of each of the different frequencies. The system can be used in laser isotope separation facilities. (U.K.)

  20. Modematic: a fast laser beam analyzing system for high power CO2-laser beams

    Science.gov (United States)

    Olsen, Flemming O.; Ulrich, Dan

    2003-03-01

    The performance of an industrial laser depends very much upon the characteristics of the laser beam. The ISO standards 11146 and 11154, describing test methods for laser beam parameters, have been approved. Implementing these methods in industry is difficult, and especially for infrared laser sources such as the CO2-laser, the available analyzing systems are slow, difficult to apply and have limited reliability due to the nature of the detection methods. In a EUREKA project, the goal was defined to develop a laser beam analyzing system dedicated to high power CO2-lasers which could fulfill the demands for an entire analyzing system: automating the time-consuming pre-alignment and beam conditioning work required before a beam mode analysis, automating the analyzing sequences and data analysis required to determine the laser beam caustics and, last but not least, delivering reliable, close to real-time data to the operator. The results of this project work are described in this paper. The research project has led to the development of the Modematic laser beam analyzer, which is ready for the market.

  1. Multiplex electric discharge gas laser system

    Science.gov (United States)

    Laudenslager, James B. (Inventor); Pacala, Thomas J. (Inventor)

    1987-01-01

    A multiple pulse electric discharge gas laser system is described in which a plurality of pulsed electric discharge gas lasers are supported in a common housing. Each laser is supplied with excitation pulses from a separate power supply. A controller, which may be a microprocessor, is connected to each power supply for controlling the application of excitation pulses to each laser so that the lasers can be fired simultaneously or in any desired sequence. The output light beams from the individual lasers may be combined or utilized independently, depending on the desired application. The individual lasers may include multiple pairs of discharge electrodes with a separate power supply connected across each electrode pair so that multiple light output beams can be generated from a single laser tube and combined or utilized separately.

  2. System of error detection in the manufacture of garments using artificial vision

    Science.gov (United States)

    Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.

    2017-12-01

    A computer vision system is implemented to detect errors in the cutting stage within the manufacturing process of garments in the textile industry. It provides a solution to errors within the process that cannot easily be detected by an employee, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control is required for manufactured products, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage within the garment manufacturing process, in order to increase the productivity of textile processes by reducing costs.

  3. Machine vision system for automated detection of stained pistachio nuts

    Science.gov (United States)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bi-chromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bi-chromatic sorter reject stream and 15% for the small shelling stock stream.

  4. Low Cost Vision Based Personal Mobile Mapping System

    Science.gov (United States)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, and to use low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. The system has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates of better than 10 cm accuracy.

  5. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

    Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, and to use low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. The system has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates of better than 10 cm accuracy.

  6. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range that exist in traditional 3D on-line contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, which is designed for intelligent manufacturing based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe is located by the stereo vision system and the tracking markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. With the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
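
    At the heart of such a system is the triangulation of each tracked marker from the two calibrated cameras, followed by an offset to the probe tip. The sketch below shows only that triangulation step with OpenCV, under assumed projection matrices and pixel coordinates; it is not the authors' calibration or software.

```python
import cv2
import numpy as np

def triangulate_marker(P_left, P_right, uv_left, uv_right):
    """3D position of one marker from its pixel coordinates in both cameras."""
    pts_l = np.asarray(uv_left, dtype=float).reshape(2, 1)
    pts_r = np.asarray(uv_right, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)   # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()

# Assumed rectified pair: identical intrinsics, 0.2 m baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

marker = triangulate_marker(P_left, P_right, (400.0, 240.0), (320.0, 240.0))
print(marker)   # roughly [0.2, 0.0, 2.0]: two metres in front of the left camera
# The probe tip would then be obtained by adding a fixed, pre-calibrated offset
# reconstructed from all six markers in the probe's own frame.
```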

  7. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  8. Laser and photonic systems design and integration

    CERN Document Server

    Nof, Shimon Y; Cheng, Gary J

    2014-01-01

    New, significant scientific discoveries in laser and photonic technologies, systems perspectives, and integrated design approaches can improve even further the impact in critical areas of challenge. Yet this knowledge is dispersed across several disciplines and research arenas. Laser and Photonic Systems: Design and Integration brings together a multidisciplinary group of experts to increase understanding of the ways in which systems perspectives may influence laser and photonic innovations and application integration.By bringing together chapters from leading scientists and technologists, ind

  9. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    Science.gov (United States)

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  10. Lasers in tattoo and pigmentation control: role of the PicoSure(®) laser system.

    Science.gov (United States)

    Torbeck, Richard; Bankowski, Richard; Henize, Sarah; Saedi, Nazanin

    2016-01-01

    The use of picosecond lasers to remove tattoos has greatly improved due to the long-standing outcomes of nanosecond lasers, both clinically and histologically. The first aesthetic picosecond laser available for this use was the PicoSure(®) laser system (755/532 nm). Now that a vast amount of research on its use has been conducted, we performed a comprehensive review of the literature to validate the continued application of the PicoSure(®) laser system for tattoo removal. A PubMed search was conducted using the term "picosecond" combined with "laser", "dermatology", and "laser tattoo removal". A total of 13 articles were identified, and ten of these met the inclusion criteria for this review. The majority of studies showed that picosecond lasers are an effective and safe treatment mode for the removal of tattoo pigments. Several studies also indicated potential novel applications of picosecond lasers in the removal of various tattoo pigments (eg, black, red, and yellow). Adverse effects were generally mild, such as transient hypopigmentation or blister formation, and were rarely more serious, such as scarring and/or textural change. Advancements in laser technologies and their application in cutaneous medicine have revolutionized the field of laser surgery. Computational modeling provides evidence that the optimal pulse durations for tattoo ink removal are in the picosecond domain. It is recommended that the PicoSure(®) laser system continue to be used for safe and effective tattoo removal, including for red and yellow pigments.

  11. Vision system for measuring wagon buffers’ lateral movements

    Directory of Open Access Journals (Sweden)

    Barjaktarović Marko

    2013-01-01

    Full Text Available This paper presents a vision system designed for measuring horizontal and vertical displacements of a railway wagon body. The model comprises a commercial webcam and a cooperative target of an appropriate shape. The lateral buffer movement is determined by calculating the target displacement in real time, processing the camera image on a LabVIEW platform using the free OpenCV library. Laboratory experiments demonstrate an accuracy better than ±0.5 mm within a 50 mm measuring range.

  12. To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.

    Science.gov (United States)

    Grifantini, Kristina

    2017-01-01

    Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.

  13. A Practical Solution Using A New Approach To Robot Vision

    Science.gov (United States)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another and lens and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write

  14. Sistema de visão por infravermelho próximo para monitoramento de processos de soldagem a arco (Near-infrared vision system for arc-welding monitoring)

    Directory of Open Access Journals (Sweden)

    Carolina Pimenta Mota

    2013-03-01

    The system developed provided homogeneous illumination in synchrony with the camera, the main limitation being the long exposure time of the available camera. Finally, it is suggested that the system be used as a seam (joint) tracker. Vision, the human being's favorite sense, with its great capacity to obtain, process and interpret large amounts of visual data, has throughout the years been a great inspiration for the development of techniques and technological devices that reproduce it in a computational system. In welding processes, vision can supply information for the inspection of welded joint quality, for parameter monitoring, for trajectory correction and even for the study of the phenomena involved in the process. However, the luminosity/radiation emitted from the weld arc represents a barrier for studies based on visualization of the process. One of the approaches currently used to visualize the process without interference from the arc's light consists of illuminating the process with near-infrared light and using band-pass (interference) filters around this exact wavelength during image acquisition. A solution for the near-infrared illumination, of increasing application, involves the use of high-power laser diodes, with low cost and less complex installation than conventional lasers. Therefore, the proposal of this work is the design, construction and assessment of a low-cost, highly flexible vision system for welding processes. It is based on the characterization of the weld arc spectrum, the definition of a drive topology for the laser diode within its limits of use while maximizing the emitted luminous power, the construction of control circuits, the selection of optical equipment and components and, finally, the design and application of a prototype for visualization of different arc-welding processes. The final assessment of the whole vision system was carried out during TIG and MIG/MAG welding. Although

  15. Laser-start-up system for magnetic mirror fusion

    International Nuclear Information System (INIS)

    Frank, A.M.; Thomas, S.R.; Denhoy, B.S.; Chargin, A.K.

    1976-01-01

    A CO2 laser system has been developed at LLL to provide hot start-up plasmas for magnetic mirror fusion experiments. A frozen ammonia pellet is irradiated with a laser power density in excess of 10^13 W/cm^2 in a 50-ns pulse. This system uses commercially available laser systems. Optical components were fabricated both by direct machining and by standard techniques. The technologies used in this system are directly applicable to reactor-scale systems

  16. Combined laser ultrasonics, laser heating, and Raman scattering in diamond anvil cell system

    Science.gov (United States)

    Zinin, Pavel V.; Prakapenka, Vitali B.; Burgess, Katherine; Odake, Shoko; Chigarev, Nikolay; Sharma, Shiv K.

    2016-12-01

    We developed a multi-functional in situ measurement system under high pressure equipped with a laser ultrasonics (LU) system, Raman device, and laser heating system (LU-LH) in a diamond anvil cell (DAC). The system consists of four components: (1) a LU-DAC system (probe and pump lasers, photodetector, and oscilloscope) and DAC; (2) a fiber laser, which is designed to allow precise control of the total power in the range from 2 to 100 W by changing the diode current, for heating samples; (3) a spectrometer for measuring the temperature of the sample (using black body radiation), fluorescence spectrum (spectrum of the ruby for pressure measurement), and Raman scattering measurements inside a DAC under high pressure and high temperature (HPHT) conditions; and (4) an optical system to focus laser beams on the sample and image it in the DAC. The system is unique and allows us to do the following: (a) measure the shear and longitudinal velocities of non-transparent materials under HPHT; (b) measure temperature in a DAC under HPHT conditions using Planck's law; (c) measure pressure in a DAC using a Raman signal; and (d) measure acoustical properties of small flat specimens removed from the DAC after HPHT treatment. In this report, we demonstrate that the LU-LH-DAC system allows measurements of velocities of the skimming waves in iron at 2580 K and 22 GPa.

  17. State of the art of CO laser angioplasty system

    Science.gov (United States)

    Arai, Tsunenori; Mizuno, Kyoichi; Miyamoto, Akira; Sakurada, Masami; Kikuchi, Makoto; Kurita, Akira; Nakamura, Haruo; Takaoka, Hidetsugu; Utsumi, Atsushi; Takeuchi, Kiyoshi

    1994-07-01

    A unique percutaneous transluminal coronary angioplasty system, a new IR therapy laser with IR glass fiber delivery under novel angioscope guidance, is described. Carbon monoxide (CO) laser emission at 5 μm in wavelength was employed as the therapy laser to achieve precise ablation of atheromatous plaque, with a flexible As-S IR glass fiber for laser delivery. We developed the first medical CO laser as well as the As-S IR glass fiber cable. We also developed a 5.5 Fr thin angioscope catheter with complete directional manipulability at its tip. The system control unit prevents faulty irradiations and fiber damage. This novel angioplasty system was evaluated in a stenosis model in mongrel dogs. We demonstrate the usefulness of our system in overcoming current issues in laser angioplasty using a multifiber catheter with an over-the-guidewire system.

  18. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    Science.gov (United States)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, which is a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Use of the vision sensor to determine the location of a sensor would also limit the possible locations, and it does not allow for room dependence (facility-dependent deviation) to generate a detector pseudo-location to be used for data analysis later. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm from its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average
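
    The "calibration-difference" metric defined above is simply the Euclidean distance between the algorithm-predicted detector location and the hand-measured one; a minimal sketch, with illustrative coordinates, is given below.

```python
import numpy as np

def calibration_difference(predicted_xyz, measured_xyz):
    """Euclidean distance between predicted and hand-measured detector locations."""
    return float(np.linalg.norm(np.asarray(predicted_xyz) - np.asarray(measured_xyz)))

# Example: a 20 cm offset along one axis (coordinates in metres).
print(calibration_difference([1.20, 0.50, 0.30], [1.00, 0.50, 0.30]))   # 0.2
```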

  19. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

    Full Text Available Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  20. Underwater laser imaging system (UWLIS)

    Energy Technology Data Exchange (ETDEWEB)

    DeLong, M. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Practical limitations with underwater imaging systems are reached when the noise in the backscattered radiation generated in the water between the imaging system and the target obscures the spatial contrast and resolution necessary for target discovery and identification. The advent of high-power lasers operating in the blue-green portion of the visible spectrum (the oceanic transmission window) has led to improved experimental illumination systems for underwater imaging. Range-gated and synchronously scanned devices take advantage of the unique temporal and spatial coherence properties of laser radiation, respectively, to overcome the deleterious effects of common-volume backscatter.

  1. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can achieve robust and real-time detection and recognition of parking spaces. During the parking process, omnidirectional information about the environment can be obtained from four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. To achieve this purpose, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine the four individual images from the fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.
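
    The Radon-transform feature extraction mentioned above can be sketched with off-the-shelf tools: a long painted marking produces a sharp peak in the sinogram, and the peak's column gives the line's orientation. The sketch below (scikit-image based, with an assumed input file and threshold) is only an illustration of that idea, not the authors' method.

```python
import numpy as np
from skimage import color, io, transform

def dominant_marking_angle(image_path):
    """Orientation (degrees) of the strongest straight marking in a bird's-eye image."""
    img = color.rgb2gray(io.imread(image_path))
    lines = (img > 0.8).astype(float)            # bright painted lines on dark asphalt
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = transform.radon(lines, theta=theta, circle=False)
    # A long straight line concentrates into a sharp peak; its column index is the angle.
    peak_col = np.unravel_index(np.argmax(sinogram), sinogram.shape)[1]
    return theta[peak_col]

# The recognised space would then seed the double circular trajectory planner
# that performs the actual manoeuvre (omitted here).
```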

  2. Container-code recognition system based on computer vision and deep neural networks

    Science.gov (United States)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both algorithms based on computer vision and neural networks, and generates a better detection result through their combination to avoid the drawbacks of the two methods. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.

  3. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model

    International Nuclear Information System (INIS)

    Jacobson, Jacob J.; Jeffers, Robert F.; Matthern, Gretchen E.; Piet, Steven J.; Baker, Benjamin A.; Grimm, Joseph

    2009-01-01

    The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R and D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., 'reactor types' not individual reactors and 'separation types' not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste. VISION is comprised of several Microsoft

  4. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    Science.gov (United States)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on the R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft, aimed at enhancing all-weather operational capability with improvements in safety and pilot Situation Awareness (SA). A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle at an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time sync with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on the CSIR-NAL research aircraft HANSA in a Degraded Visual Environment (DVE).

  5. Performance of the Aurora KrF ICF laser system

    International Nuclear Information System (INIS)

    Jones, J.E.; Czuchlewski, S.J.; Turner, T.P.; Watt, R.G.; Thomas, S.J.; Netz, D.A.; Tallman, C.R.; Mack, J.M.; Figueira, J.F.

    1990-01-01

    Because short wavelength lasers are attractive for inertial confinement fusion (ICF), the Department of Energy is sponsoring work at Los Alamos National Laboratory in krypton-fluoride (KrF) laser technology. Aurora is a short-pulse, high-power, KrF laser system. It serves as an end-to-end technology demonstration prototype for large-scale ultraviolet laser systems for short wavelength ICF research. The system employs optical angular multiplexing and serial amplification by electron-beam-driven KrF laser amplifiers. The 1 to 5 ns pulse of the Aurora front end is split into 96 beams which are angularly and temporally multiplexed to produce a 480 ns pulse train for amplification by four KrF laser amplifiers. In the present system configuration half (48) of the amplified pulses are demultiplexed using different optical path lengths and delivered simultaneously to target. This paper discusses how the Aurora laser system has entered the initial operational phase by delivering pulse energies of greater than one kilojoule to target
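
    The quoted train length follows directly from the multiplexing arithmetic; a sketch, assuming the nominal 5 ns slice at the upper end of the 1 to 5 ns front-end pulse range:

```latex
% 96 time-multiplexed copies of a 5 ns pulse fill a 480 ns train:
\[ 96 \times 5\,\mathrm{ns} = 480\,\mathrm{ns} \]
% Demultiplexing relies on optical path-length differences: each 5 ns slot
% corresponds to roughly $c \times 5\,\mathrm{ns} \approx 1.5\,\mathrm{m}$ of extra path.
```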

  6. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, in itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to transition abruptly from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  7. Low-Power Smart Imagers for Vision-Enabled Sensor Networks

    CERN Document Server

    Fernández-Berni, Jorge; Rodríguez-Vázquez, Ángel

    2012-01-01

    This book presents a comprehensive, systematic approach to the development of vision system architectures that employ sensory-processing concurrency and parallel processing to meet the autonomy challenges posed by a variety of safety and surveillance applications. Coverage includes a thorough analysis of resistive diffusion networks embedded within an image sensor array. This analysis supports a systematic approach to the design of spatial image filters and their implementation as vision chips in CMOS technology. The book also addresses system-level considerations pertaining to the embedding of these vision chips into vision-enabled wireless sensor networks. Describes a system-level approach for designing vision devices and embedding them into vision-enabled, wireless sensor networks; Surveys state-of-the-art, vision-enabled WSN nodes; Includes details of specifications and challenges of vision-enabled WSNs; Explains architectures for low-energy CMOS vision chips with embedded, programmable spatial f...

  8. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines grating projection profilometry using plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching required in stereo vision and without the phase unwrapping required in grating projection profilometry. First, the new vision sensor is studied theoretically, and the geometric and mathematical model of the grating projection stereo vision system is built. Second, the computational method for the 3D coordinates of obstacles in the robot's visual field is studied, and the obstacles in the field are then located accurately. Simulation experiments and analysis show that this research helps address the current autonomous navigation problem of mobile robots in dark environments, and provides a theoretical basis and direction for further study on the navigation of space exploration robots in dark, GPS-denied environments.

  9. Computer vision as an alternative for collision detection

    OpenAIRE

    Drangsholt, Marius Aarvik

    2015-01-01

    The goal of this thesis was to implement a computer vision system on a low power platform, to see if that could be an alternative for a collision detection system. To achieve this, research into the fundamentals of computer vision was performed, and both hardware and software implementations were carried out. To create the computer vision system, a stereo rig was constructed using low-cost Logitech webcams and connected to a Raspberry Pi 2 development board. The computer vision library Op...

  10. Nonlinear optical systems

    CERN Document Server

    Lugiato, Luigi; Brambilla, Massimo

    2015-01-01

    Guiding graduate students and researchers through the complex world of laser physics and nonlinear optics, this book provides an in-depth exploration of the dynamics of lasers and other relevant optical systems, under the umbrella of a unitary spatio-temporal vision. Adopting a balanced approach, the book covers traditional as well as special topics in laser physics, quantum electronics and nonlinear optics, treating them from the viewpoint of nonlinear dynamical systems. These include laser emission, frequency generation, solitons, optically bistable systems, pulsations and chaos and optical pattern formation. It also provides a coherent and up-to-date treatment of the hierarchy of nonlinear optical models and of the rich variety of phenomena they describe, helping readers to understand the limits of validity of each model and the connections among the phenomena. It is ideal for graduate students and researchers in nonlinear optics, quantum electronics, laser physics and photonics.

  11. Laser-based agriculture system

    KAUST Repository

    Ooi, Boon S.

    2016-03-31

    A system and method are provided for indoor agriculture using at least one growth chamber illuminated by laser light. In an example embodiment of the agriculture system, a growth chamber is provided having one or more walls defining an interior portion of the growth chamber. The agriculture system may include a removable tray disposed within the interior portion of the growth chamber. The agriculture system also includes a light source, which may be disposed outside the growth chamber. The one or more walls may include at least one aperture. The light source is configured to illuminate at least a part of the interior portion of the growth chamber. In embodiments in which the light source is disposed outside the growth chamber, the light source is configured to transmit the laser light to the interior portion of the growth chamber via the at least one aperture.

  12. Laser-based agriculture system

    KAUST Repository

    Ooi, Boon S.; Wong, Aloysius Tze; Ng, Tien Khee

    2016-01-01

    A system and method are provided for indoor agriculture using at least one growth chamber illuminated by laser light. In an example embodiment of the agriculture system, a growth chamber is provided having one or more walls defining an interior portion of the growth chamber. The agriculture system may include a removable tray disposed within the interior portion of the growth chamber. The agriculture system also includes a light source, which may be disposed outside the growth chamber. The one or more walls may include at least one aperture. The light source is configured to illuminate at least a part of the interior portion of the growth chamber. In embodiments in which the light source is disposed outside the growth chamber, the light source is configured to transmit the laser light to the interior portion of the growth chamber via the at least one aperture.

  13. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today are using mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have been recently developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of multi-baseline stereovision (narrow and wide baselines) and laser-range data to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is the pruning of the region of interest of the three-dimensional point clouds to reduce the computational burden involved in the stereo process. Therefore, we call the proposed system the multi-sensors multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.

  14. PHASE NOISE COMPARISON OF SHORT PULSE LASER SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Shukui Zhang; Stephen Benson; John Hansknecht; David Hardy; George Neil; Michelle D. Shinn

    2006-08-27

    This paper describes phase noise measurements of several different laser systems that have completely different gain media and configurations including a multi-kW free-electron laser. We will focus on state-of-the-art short pulse lasers, especially drive lasers for photocathode injectors. Phase noise comparison of the FEL drive laser, electron beam and FEL laser output also will be presented.

  15. High power laser downhole cutting tools and systems

    Science.gov (United States)

    Zediker, Mark S; Rinzler, Charles C; Faircloth, Brian O; Koblick, Yeshaya; Moxley, Joel F

    2015-01-20

    Downhole cutting systems, devices and methods for utilizing 10 kW or more of laser energy transmitted deep into the earth with suppression of the associated nonlinear phenomena. Systems and devices for laser cutting operations within a borehole in the earth. These systems and devices can deliver high power laser energy down a deep borehole, while maintaining the high power to perform cutting operations in such boreholes deep within the earth.

  16. Visions, Scenarios and Action Plans Towards Next Generation Tanzania Power System

    Directory of Open Access Journals (Sweden)

    Alex Kyaruzi

    2012-10-01

    Full Text Available This paper presents strategic visions, scenarios and action plans for enhancing Tanzania Power Systems towards next generation Smart Power Grid. It first introduces the present Tanzanian power grid and the challenges ahead in terms of generation capacity, financial aspect, technical and non-technical losses, revenue loss, high tariff, aging infrastructure, environmental impact and the interconnection with the neighboring countries. Then, the current initiatives undertaken by the Tanzania government in response to the present challenges and the expected roles of smart grid in overcoming these challenges in the future with respect to the scenarios presented are discussed. The developed scenarios along with visions and recommended action plans towards the future Tanzanian power system can be exploited at all governmental levels to achieve public policy goals and help develop business opportunities by motivating domestic and international investments in modernizing the nation’s electric power infrastructure. In return, it should help build the green energy economy.

  17. Lasers in tattoo and pigmentation control: role of the PicoSure® laser system

    Science.gov (United States)

    Torbeck, Richard; Bankowski, Richard; Henize, Sarah; Saedi, Nazanin

    2016-01-01

    Background and objectives The use of picosecond lasers to remove tattoos has greatly improved due to the long-standing outcomes of nanosecond lasers, both clinically and histologically. The first aesthetic picosecond laser available for this use was the PicoSure® laser system (755/532 nm). Now that a vast amount of research on its use has been conducted, we performed a comprehensive review of the literature to validate the continued application of the PicoSure® laser system for tattoo removal. Study design and methods A PubMed search was conducted using the term “picosecond” combined with “laser”, “dermatology”, and “laser tattoo removal”. Results A total of 13 articles were identified, and ten of these met the inclusion criteria for this review. The majority of studies showed that picosecond lasers are an effective and safe treatment mode for the removal of tattoo pigments. Several studies also indicated potential novel applications of picosecond lasers in the removal of various tattoo pigments (eg, black, red, and yellow). Adverse effects were generally mild, such as transient hypopigmentation or blister formation, and were rarely more serious, such as scarring and/or textural change. Conclusion Advancements in laser technologies and their application in cutaneous medicine have revolutionized the field of laser surgery. Computational modeling provides evidence that the optimal pulse durations for tattoo ink removal are in the picosecond domain. It is recommended that the PicoSure® laser system continue to be used for safe and effective tattoo removal, including for red and yellow pigments. PMID:27194919

  18. Non-contact finger vein acquisition system using NIR laser

    Science.gov (United States)

    Kim, Jiman; Kong, Hyoun-Joong; Park, Sangyun; Noh, SeungWoo; Lee, Seung-Rae; Kim, Taejeong; Kim, Hee Chan

    2009-02-01

    Authentication using finger vein patterns has substantial advantages over other biometrics. Because human vein patterns are hidden inside the skin and tissue, it is hard to forge the vein structure. However, a conventional system using an NIR LED array has two drawbacks. First, direct contact with the LED array raises sanitary problems. Second, because of the discreteness of the LEDs, the illumination is non-uniform. We propose a non-contact finger vein acquisition system using an NIR laser and a laser line generator lens. The laser line generator lens produces an evenly distributed line of laser light from the focused beam. The line laser is aimed along the finger longitudinally. An NIR camera was used for image acquisition. 200 index finger vein images from 20 candidates were collected. The same finger vein pattern extraction algorithm was used to evaluate the two sets of images. Images acquired with the proposed non-contact system do not show any non-uniform illumination, in contrast to the conventional system. The matching results are also comparable to the conventional system. We developed a non-contact finger vein acquisition system that can prevent potential cross contamination of skin diseases and can produce uniformly illuminated images, unlike the conventional system. With the benefit of being non-contact, the proposed system shows almost equivalent performance compared with the conventional system.

  19. Excimer laser ablation of the cornea

    Science.gov (United States)

    Pettit, George H.; Ediger, Marwood N.; Weiblinger, Richard P.

    1995-03-01

    Pulsed ultraviolet laser ablation is being extensively investigated clinically to reshape the optical surface of the eye and correct vision defects. Current knowledge of the laser/tissue interaction and the present state of the clinical evaluation are reviewed. In addition, the principal findings of internal Food and Drug Administration research are described in some detail, including a risk assessment of laser-induced fluorescence and measurement of the nonlinear optical properties of the cornea during intense UV irradiation. Finally, a survey is presented of the alternative laser technologies being explored for this ophthalmic application.

  20. Demonstration of Laser Cutting System for Tube Specimen

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Y. G.; Kim, G. S.; Heo, G. S.; Baik, S. J.; Kim, H. M.; Ahn, S. B. [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    An oxide layer removal system was also developed because the oxide layer on the surface of the irradiated fuel cladding and components interfered with applying the electric current during processing. However, it was found that the mechanical testing data of irradiated specimens with the oxide layer removed were less reliable than those of specimens with the oxide layer intact. A laser cutting system using an Nd:YAG laser with fiber optic beam delivery has great potential in material processing applications for irradiated fuel cladding and components because it is a non-contact process; thus, the oxide layer does not interfere with the fabrication process when the laser cutting system is used. In the present study, the laser cutting system was designed to fabricate mechanical testing specimens from unirradiated fuel cladding with and without oxide. The feasibility of the laser cutting system was demonstrated for the fabrication of various types of unirradiated specimens. The effect of the surface oxide layer was also investigated for the machining process of the ZIRLO fuel cladding, and it was found that laser beam machining could be a useful tool to fabricate specimens with a surface oxide layer. Based on these feasibility studies and demonstrations, a design of the laser cutting machine for a fully or partially automatic and remotely operable system will be proposed and built.

  1. International Border Management Systems (IBMS) Program : visions and strategies.

    Energy Technology Data Exchange (ETDEWEB)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  2. Lasers in tattoo and pigmentation control: role of the PicoSure® laser system

    Directory of Open Access Journals (Sweden)

    Torbeck R

    2016-05-01

    Full Text Available Richard Torbeck,1 Richard Bankowski,2 Sarah Henize,3 Nazanin Saedi1 (1Department of Dermatology and Cutaneous Biology, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA; 2Cynosure, Inc, Westford, MA; 3Huron Consulting Group, Chicago, IL, USA). Background and objectives: The use of picosecond lasers to remove tattoos has greatly improved due to the long-standing outcomes of nanosecond lasers, both clinically and histologically. The first aesthetic picosecond laser available for this use was the PicoSure® laser system (755/532 nm). Now that a vast amount of research on its use has been conducted, we performed a comprehensive review of the literature to validate the continued application of the PicoSure® laser system for tattoo removal. Study design and methods: A PubMed search was conducted using the term "picosecond" combined with "laser", "dermatology", and "laser tattoo removal". Results: A total of 13 articles were identified, and ten of these met the inclusion criteria for this review. The majority of studies showed that picosecond lasers are an effective and safe treatment mode for the removal of tattoo pigments. Several studies also indicated potential novel applications of picosecond lasers in the removal of various tattoo pigments (eg, black, red, and yellow). Adverse effects were generally mild, such as transient hypopigmentation or blister formation, and were rarely more serious, such as scarring and/or textural change. Conclusion: Advancements in laser technologies and their application in cutaneous medicine have revolutionized the field of laser surgery. Computational modeling provides evidence that the optimal pulse durations for tattoo ink removal are in the picosecond domain. It is recommended that the PicoSure® laser system continue to be used for safe and effective tattoo removal, including for red and yellow pigments. Keywords: tattoo, removal, laser, picosecond

  3. Laser fusion systems for industrial process heat. Third semiannual report

    International Nuclear Information System (INIS)

    Bates, F.J.; Denning, R.S.; Dykhuizen, R.C.; Goldthwaite, W.H.; Kok, K.D.; Skelton, J.C.

    1979-01-01

    This report concentrates not only on the design of the laser fusion system but also on the cost of this system and the costs of alternative sources of energy that are expected to be in competition with it. The absolute values of the cost of the laser fusion system are limited by the estimates of the cost of the components and subsystems making up the laser fusion energy station. The method used in calculating the costs of the laser fusion and alternative systems is laid out in detail

  4. Vision Assessment and Prescription of Low Vision Devices

    OpenAIRE

    Keeffe, Jill

    2004-01-01

    Assessment of vision and prescription of low vision devices are part of a comprehensive low vision service. Other components of the service include training the person affected by low vision in use of vision and other senses, mobility, activities of daily living, and support for education, employment or leisure activities. Specialist vision rehabilitation agencies have services to provide access to information (libraries) and activity centres for groups of people with impaired vision.

  5. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
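
    As a rough illustration of the pipeline shape described above (feature vectors, dimensionality reduction, then a neural-network classifier), the sketch below uses scikit-learn on synthetic data; the authors' feature generators and GLPτS-based training are not reproduced, and every name and size is a placeholder.

```python
# Sketch of a features -> PCA -> neural-network classification pipeline on
# synthetic stand-in data (random vectors, four hypothetical tile classes).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))            # stand-in texture feature vectors
y = rng.integers(0, 4, size=600)          # four hypothetical cork-tile classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pca = PCA(n_components=10).fit(X_tr)      # dimensionality reduction step
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(pca.transform(X_te), y_te))
```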

  6. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  7. Laser surveillance system (LASSY)

    International Nuclear Information System (INIS)

    Boeck, H.; Hammer, J.

    1988-01-01

    The development progress during the reporting period 1988 of the laser surveillance system of spent fuel pools is summarized. The present engineered system comes close to a final version for field application as all technical questions have been solved in 1988. 14 figs., 1 tab. (Author)

  8. Demo : an embedded vision system for high frame rate visual servoing

    NARCIS (Netherlands)

    Ye, Z.; He, Y.; Pieters, R.S.; Mesman, B.; Corporaal, H.; Jonker, P.P.

    2011-01-01

    The frame rate of commercial off-the-shelf industrial cameras is breaking the threshold of 1000 frames per second, the sample rate required in high performance motion control systems. On the one hand, this enables computer vision as a cost-effective feedback source; on the other hand, it imposes

  9. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the... Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  10. Vision Problems in Homeless Children.

    Science.gov (United States)

    Smith, Natalie L; Smith, Thomas J; DeSantis, Diana; Suhocki, Marissa; Fenske, Danielle

    2015-08-01

    Vision problems in homeless children can decrease educational achievement and quality of life. To estimate the prevalence and specific diagnoses of vision problems in children in an urban homeless shelter, a prospective series of 107 homeless children and teenagers underwent screening with a vision questionnaire, eye-chart screening (if mature enough) and, if a vision problem was suspected, evaluation by a pediatric ophthalmologist. Glasses and other therapeutic interventions were provided if necessary. The prevalence of vision problems in this population was 25%. Common diagnoses included astigmatism, amblyopia, anisometropia, myopia, and hyperopia. Glasses were required and provided for 24 children (22%). Vision problems in homeless children are common and frequently correctable with ophthalmic intervention. Evaluation by a pediatric ophthalmologist is crucial for accurate diagnosis and treatment. Our system of screening and evaluation is feasible, efficacious, and reproducible in other homeless care situations.

  11. Active solution of homography for pavement crack recovery with four laser lines.

    Science.gov (United States)

    Xu, Guan; Chen, Fang; Wu, Guangwei; Li, Xiaotao

    2018-05-08

    An active solution method for the homography, derived from four laser lines, is proposed to map the pavement cracks captured by the camera to real-dimension cracks in the pavement plane. The measurement system, including a camera and four laser projectors, captures the projected laser points on a 2D reference in different positions. The projected laser points are reconstructed in the camera coordinate system. Then, the laser lines are initialized and optimized from the projected laser points. Moreover, the plane-indicated Plücker matrices of the optimized laser lines are employed to model the laser projection points of the laser lines on the pavement. The image-pavement homography is actively determined from the solutions for the perpendicular feet of the projection laser points. The pavement cracks are recovered by the active solution of the homography in the experiments. The recovery accuracy of the active solution method is verified with a 2D reference of known dimensions. The test case with a measurement distance of 700 mm and a relative angle of 8° achieves the smallest recovery error of 0.78 mm in the experimental investigations, which indicates the method's application potential in vision-based pavement inspection.
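
    The step the method ultimately relies on is an image-to-plane homography; the sketch below shows the generic version of that mapping, estimated from four point correspondences with OpenCV, rather than the authors' active Plücker-line/perpendicular-foot solution. All coordinates are invented placeholders.

```python
# Generic image-to-pavement homography: four image points with known
# pavement-plane coordinates define H, which then maps crack pixels to
# metric (true-scale) pavement coordinates.
import numpy as np
import cv2

img_pts = np.array([[102.0, 88.0], [840.0, 95.0], [865.0, 610.0], [95.0, 600.0]])
plane_pts = np.array([[0.0, 0.0], [700.0, 0.0], [700.0, 500.0], [0.0, 500.0]])  # mm

H, _ = cv2.findHomography(img_pts, plane_pts)

crack_px = np.array([[[450.0, 320.0]], [[455.0, 340.0]]])   # crack pixels in the image
crack_mm = cv2.perspectiveTransform(crack_px, H)            # crack points in the pavement plane
print(crack_mm.reshape(-1, 2))
```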

  12. Performance of a high repetition pulse rate laser system for in-gas-jet laser ionization studies with the Leuven laser ion source LISOL

    International Nuclear Information System (INIS)

    Ferrer, R.; Sonnenschein, V.T.; Bastin, B.; Franchoo, S.; Huyse, M.; Kudryavtsev, Yu.; Kron, T.; Lecesne, N.; Moore, I.D.; Osmond, B.; Pauwels, D.; Radulov, D.; Raeder, S.; Rens, L.

    2012-01-01

    The laser ionization efficiency of the Leuven gas cell-based laser ion source was investigated under on- and off-line conditions using two distinctly different laser setups: a low-repetition rate dye laser system and a high-repetition rate Ti:sapphire laser system. A systematic study of the ion signal dependence on repetition rate and laser pulse energy was performed in off-line tests using stable cobalt and copper isotopes. These studies also included in-gas-jet laser spectroscopy measurements on the hyperfine structure of 63Cu. A final run under on-line conditions, in which the radioactive isotope 59Cu (T1/2 = 81.5 s) was produced, showed a comparable yield of the two laser systems for in-gas-cell ionization. However, a significantly improved time overlap by using the high-repetition rate laser system for in-gas-jet ionization was demonstrated by an increase of the overall duty cycle, and at the same time, pointed to the need for a better shaped atomic jet to reach higher ionization efficiencies.

  13. Night vision: changing the way we drive

    Science.gov (United States)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  14. Development of a Laser Induced Fluorescence (LIF) System with a Tunable Diode Laser

    International Nuclear Information System (INIS)

    Woo, Hyun Jong; Do, Jeong Jun; You, Hyun Jong; Choi, Geun Sik; Lee, Myoung Jae; Chung, Kyu Sun

    2005-01-01

    Laser Induced Fluorescence (LIF) is known as one of the most powerful techniques for measuring the ion velocity distribution function (IVDF) and ion temperature by means of Doppler broadening and Doppler shift. Dye lasers are generally used for LIF systems at 611.66 nm (in vacuum) for Ar ions; a low-power diode laser was also proposed by Severn et al at wavelengths of 664.55 nm and 668.61 nm (in vacuum) for Ar ions. Although the diode laser has the disadvantages of low power and a small tuning range, it can be used for LIF in low-temperature plasmas. A tunable diode laser with a center wavelength of 668.614 nm and a 10 GHz mode-hop-free tuning range has been used for our LIF system, with which ion temperatures of up to 1 eV can be measured. The ion temperature and velocity distribution function have been measured with a LaB6 plasma source; the ion temperature is about 0.23 eV with Ar gas at a working pressure of 2.2 mTorr.
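
    A sketch of the standard relation assumed in such LIF temperature measurements, under a purely thermal (Gaussian) line profile with negligible instrumental width:

```latex
% Doppler-broadened linewidth and the ion temperature extracted from it
% (thermal Gaussian profile; instrumental and Zeeman contributions neglected):
\[
\Delta\lambda_{\mathrm{FWHM}}
  = \lambda_0\sqrt{\frac{8\,k_B T_i \ln 2}{m_i c^{2}}}
\quad\Longrightarrow\quad
T_i = \frac{m_i c^{2}}{8\,k_B \ln 2}
      \left(\frac{\Delta\lambda_{\mathrm{FWHM}}}{\lambda_0}\right)^{2}
\]
```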

  15. Comparison of three different laser systems for application in dentistry

    Science.gov (United States)

    Mindermann, Anja; Niemz, M. H.; Eisenmann, L.; Loesel, Frieder H.; Bille, Josef F.

    1993-12-01

    Three different laser systems have been investigated with regard to their possible application in dentistry: a free-running and a Q-switched microsecond Ho:YAG laser, a free-running microsecond Er:YAG laser, and a picosecond Nd:YLF laser system consisting of an actively mode-locked oscillator and a regenerative amplifier. The experiments focused on the question of whether lasers can support or even replace conventional dental drills. For this purpose several cavities were generated with the lasers mentioned above. Their depth and quality were judged by light and electron microscopy. The results of the experiments showed that the picosecond Nd:YLF laser system has advantages over the other lasers for application in dentistry.

  16. A 1J LD pumped Nd:YAG pulsed laser system

    Science.gov (United States)

    Yi, Xue-bin; Wang, Bin; Yang, Feng; Li, Jing; Liu, Ya-Ping; Li, Hui-Jun; Wang, Yu; Chen, Ren

    2017-11-01

    A 1 J LD-pumped Nd:YAG pulsed laser was designed. The laser uses an oscillator and two-stage amplification structure, with integrated diode bar arrays as side pumps. A TEC temperature control device combined with a liquid cooling system is used to control the temperature of the laser system. This study also analyzed the theoretical threshold of the working material, the effect of the thermal lens and the basic principles of laser amplification. The results showed that the laser system can achieve 1 J, 25 Hz pulsed laser output, that the laser pulse can be output at two widths, 6-7 ns and 10 ns, and that the original beam divergence is 1.2 mrad. The laser system is characterized by small size, light weight and good stability, which allow it to be applied in various fields such as photovoltaic radar platforms.

  17. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images; the maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity levels. PMID:23459385
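
    As a plain-software reference for the cost function the FPGA pipeline computes, the sketch below implements SAD block matching with a 5 × 5 window and up to 64 disparities; it is far slower than the SoPC hardware and is only meant to show the per-pixel computation.

```python
# Reference SAD block matching: for each candidate disparity, sum absolute
# differences over a square window and keep the disparity with the lowest cost.
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, win=5):
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    disparity = np.zeros((h, w), dtype=np.uint8)
    for d in range(max_disp):
        shifted = np.zeros_like(right)
        shifted[:, d:] = right[:, :w - d]                         # candidate disparity d
        cost = uniform_filter(np.abs(left - shifted), size=win)   # windowed SAD (mean form)
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity

# Example on synthetic data just to show the call shape:
rng = np.random.default_rng(0)
L = rng.integers(0, 255, (480, 640)).astype(np.uint8)
R = np.roll(L, -8, axis=1)                                        # synthetic 8-pixel shift
print(np.median(sad_disparity(L, R)))                             # close to 8 away from borders
```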

  18. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    Science.gov (United States)

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  19. Development of machine vision system for PHWR fuel pellet inspection

    Energy Technology Data Exchange (ETDEWEB)

    Kamalesh Kumar, B.; Reddy, K.S.; Lakshminarayana, A.; Sastry, V.S.; Ramana Rao, A.V. [Nuclear Fuel Complex, Hyderabad, Andhra Pradesh (India); Joshi, M.; Deshpande, P.; Navathe, C.P.; Jayaraj, R.N. [Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh (India)

    2008-07-01

    Nuclear Fuel Complex (NFC), a constituent of the Department of Atomic Energy, India, is responsible for manufacturing nuclear fuel in India. Over a million uranium dioxide pellets fabricated per annum need visual inspection. In order to overcome the limitations of human-based visual inspection, NFC has undertaken the development of a machine vision system. The development involved designing various subsystems, viz. a mechanical and control subsystem for handling and rotation of fuel pellets, a lighting subsystem for illumination, an image acquisition system, and an image processing system, and their integration. This paper brings out details of the various subsystems and the results obtained from the trials conducted. (author)

  20. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD. Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
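
    The abstract does not give the filter's exact form; as a plain-software illustration of the class of nonlinear, order-statistics-based spatiotemporal filtering it describes, the sketch below applies a median over a 3 × 3 spatial window and three consecutive frames (all names and sizes are placeholders).

```python
# Reference spatiotemporal order-statistics filter: a median over a
# 3x3x3 neighbourhood (3x3 spatial window, 3 consecutive frames).
import numpy as np

def spatiotemporal_median(frames):
    """frames: (T, H, W) uint8 video; returns filtered middle frames (T-2, H, W)."""
    T, H, W = frames.shape
    pad = np.pad(frames, ((0, 0), (1, 1), (1, 1)), mode='edge')
    out = np.empty((T - 2, H, W), dtype=frames.dtype)
    for t in range(1, T - 1):
        # Gather the 27-sample neighbourhood of every pixel at once.
        stack = [pad[t + dt, 1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
                 for dt in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        out[t - 1] = np.median(np.stack(stack, axis=0), axis=0)
    return out

noisy = np.random.default_rng(0).integers(0, 255, (5, 64, 64)).astype(np.uint8)
print(spatiotemporal_median(noisy).shape)   # (3, 64, 64)
```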

  1. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.   ·         Provides i...

  2. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color.

    Science.gov (United States)

    Trinderup, Camilla H; Dahl, Anders; Jensen, Kirsten; Carstensen, Jens Michael; Conradsen, Knut

    2015-04-01

    The color assessment ability of a multispectral vision system is investigated in a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex, heterogeneous material with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis was conducted and showed that on a calibration sheet the two instruments are equally capable of measuring color. Moreover, the vision system provides a more color-rich assessment of fresh meat samples with a glossier surface than the colorimeter. Careful study of the different sources of variation enables an assessment of the order of magnitude of the between-method variability, accounting for the other sources of variation, and leads to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Science.gov (United States)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-04-28

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedures for these error sources but still promises accurate 3D measurement. It is realized by a mathematical model extension technique. The parameters of the extended mathematical model for the phase-to-3D-coordinates transformation are derived using a least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval of spatially isolated objects. The results demonstrate the validity and accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
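
    The abstract does not give the extended model's exact form; as a toy illustration of the least-squares parameter estimation step, the sketch below fits a simple per-pixel polynomial phase-to-height mapping from calibration measurements at known heights (all numbers are invented).

```python
# Toy least-squares fit of a phase-to-height model z = a0 + a1*phi + a2*phi^2
# at a single pixel, from calibration measurements at known heights.
import numpy as np

phi = np.array([1.10, 2.05, 2.98, 4.02, 5.10])     # unwrapped phase at one pixel
z = np.array([0.0, 5.0, 10.0, 15.0, 20.0])         # known calibration heights (mm)

A = np.vstack([np.ones_like(phi), phi, phi**2]).T  # design matrix
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)     # least-squares parameters

phi_measured = 3.5
print(coeffs @ [1.0, phi_measured, phi_measured**2])  # estimated height (mm)
```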

  4. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    Science.gov (United States)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
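
    The geometric core of such a two-camera tracker is triangulation: once the hand is located in both images, its 3-D coordinates follow from the calibrated projection matrices. The sketch below shows that step only; the matrices and pixel coordinates are made-up placeholders, not the system's calibration.

```python
# Two-view triangulation of a tracked hand position with OpenCV.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])       # shared intrinsics (assumed)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # left camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-120.0], [0], [0]])])    # right camera, 120 mm baseline

xy1 = np.array([[350.0], [260.0]])        # hand centre in the left image (pixels)
xy2 = np.array([[305.0], [260.0]])        # hand centre in the right image (pixels)

Xh = cv2.triangulatePoints(P1, P2, xy1, xy2)    # homogeneous 4-vector
print((Xh[:3] / Xh[3]).ravel())                 # hand position in mm
```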

  5. Using Scenario Visioning and Participatory System Dynamics Modeling to Investigate the Future: Lessons from Minnesota 2050

    Directory of Open Access Journals (Sweden)

    Kathryn J. Draeger

    2010-08-01

    Full Text Available Both scenario visioning and participatory system dynamics modeling emphasize the dynamic and uncontrollable nature of complex socio-ecological systems, and the significance of multiple feedback mechanisms. These two methodologies complement one another, but are rarely used together. We partnered with regional organizations in Minnesota to design a future visioning process that incorporated both scenarios and participatory system dynamics modeling. The three purposes of this exercise were: first, to assist regional leaders in making strategic decisions that would make their communities sustainable; second, to identify research gaps that could impede the ability of regional and state groups to plan for the future; and finally, to introduce more systems thinking into planning and policy-making around environmental issues. We found that scenarios and modeling complemented one another, and that both techniques allowed regional groups to focus on the sustainability of fundamental support systems (energy, food, and water supply. The process introduced some creative tensions between imaginative scenario visioning and quantitative system dynamics modeling, and between creating desired futures (a strong cultural norm and inhabiting the future (a premise of the Minnesota 2050 exercise. We suggest that these tensions can stimulate more agile, strategic thinking about the future.

  6. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  7. Welding of Thin Steel Plates by Hybrid Welding Process Combined TIG Arc with YAG Laser

    Science.gov (United States)

    Kim, Taewon; Suga, Yasuo; Koike, Takashi

    TIG arc welding and laser welding are widely used. However, each of these welding processes has its own advantages and problems. In order to address these problems and make use of the advantages of the arc and laser welding processes, a hybrid welding process combining the TIG arc with the YAG laser was studied. In particular, suitable welding conditions for thin steel plates were investigated to obtain sound welds with clean surface and back beads and without weld defects. As a result, it was confirmed that the shot position of the laser beam is very important for obtaining sound welds in hybrid welding. Therefore, a new intelligent system to monitor the welding area using a vision sensor was constructed. Furthermore, a control system to direct the laser beam to a selected position in the molten pool, which is formed by the TIG arc, was constructed. As a result of welding experiments using these systems, it was confirmed that the hybrid welding process and the control system are effective for the stable welding of thin stainless steel plates.

  8. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  9. Evolution of shiva laser alignment systems

    International Nuclear Information System (INIS)

    Boyd, R.D.

    1980-07-01

    The Shiva oscillator pulse is preamplified and divided into twenty beams. Each beam is then amplified, spatially filtered, directed, and focused onto a target a few hundred micrometers in size producing optical intensities up to 10^16 W/cm^2. The laser was designed and built with three automatic alignment systems: the oscillator alignment system, which aligns each of the laser's three oscillators to a reference beamline; the chain input pointing system, which points each beam into its respective chain; and the chain output pointing, focusing and centering system which points, centers and focuses the beam onto the target. Recently the alignment of the laser's one hundred twenty spatial filter pinholes was also automated. This system uses digitized video images of back-illuminated pinholes and computer analysis to determine current positions. The offset of each current position from a desired center point is then translated into stepper motor commands and the pinhole is moved the proper distance. While motors for one pinhole are moving, the system can digitize, analyze, and send commands to other motors, allowing the system to efficiently align several pinholes in parallel
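
    A minimal sketch of the image-to-motor step described above (centroid of the back-illuminated pinhole, offset from the desired centre, conversion to stepper-motor counts); the pixels-per-step scale factor is invented.

```python
# Locate a back-illuminated pinhole in a digitized frame and convert its
# offset from the desired centre into stepper-motor counts for each axis.
import numpy as np

def pinhole_motor_commands(frame, target=(240, 320), steps_per_pixel=(2.5, 2.5)):
    """frame: 2-D intensity image of a back-illuminated pinhole."""
    mask = frame > frame.max() * 0.5                  # bright pinhole blob
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                     # binary centroid of the blob
    dy, dx = target[0] - cy, target[1] - cx           # offset from desired centre
    return int(round(dy * steps_per_pixel[0])), int(round(dx * steps_per_pixel[1]))

img = np.zeros((480, 640))
img[200:210, 400:410] = 255.0                         # synthetic off-centre pinhole
print(pinhole_motor_commands(img))                    # motor counts for each axis
```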

  10. A low-cost machine vision system for the recognition and sorting of small parts

    Science.gov (United States)

    Barea, Gustavo; Surgenor, Brian W.; Chauhan, Vedang; Joshi, Keyur D.

    2018-04-01

    An automated machine vision-based system for the recognition and sorting of small parts was designed, assembled and tested. The system was developed to address a need to expose engineering students to the issues of machine vision and assembly automation technology, with readily available and relatively low-cost hardware and software. This paper outlines the design of the system and presents experimental performance results. Three different styles of plastic gears, together with three different styles of defective gears, were used to test the system. A pattern matching tool was used for part classification. Nine experiments were conducted to demonstrate the effects of changing various hardware and software parameters, including: conveyor speed, gear feed rate, classification, and identification score thresholds. It was found that the system could achieve a maximum system accuracy of 95% at a feed rate of 60 parts/min, for a given set of parameter settings. Future work will be looking at the effect of lighting.
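
    A minimal sketch of pattern-matching classification with a score threshold, as varied in the experiments; this uses OpenCV's normalized cross-correlation template matcher rather than the (unnamed) commercial tool, and the images are synthetic placeholders.

```python
# Classify a part by correlating one template per part style against the
# camera frame and accepting the best match only above a score threshold.
import cv2
import numpy as np

def classify_part(frame_gray, templates, score_threshold=0.8):
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():
        result = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_label, best_score = label, score
    return (best_label, best_score) if best_score >= score_threshold else (None, best_score)

# Usage with synthetic images (real use would load camera frames and gear templates):
frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
templates = {"gear_A": frame[100:160, 100:160].copy()}   # a patch cut from the frame
print(classify_part(frame, templates))                   # ("gear_A", ~1.0)
```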

  11. Laser puncture therapy of nervous system disorders

    Energy Technology Data Exchange (ETDEWEB)

    Anishchenko, G.; Kochetkov, V.

    1984-08-29

    The authors discuss experience with treatment of nervous system disorders by means of laser-puncture therapy. Commenting on the background of the selection of this type of treatment, they explain that once researchers determined the biological action of laser light on specific nerve receptors of the skin, development of laser apparatus capable of concentrating the beam in the millimeter band was undertaken. The devices that are being used for laser-puncture are said to operate in the red helium-neon band of light. The authors identify beam parameters that have been selected for different groups of acupuncture points of the skin, and the courses of treatment (in seconds of radiation) and their time intervals. They go on to discuss the results of treatment of over 800 patients categorized in a group with disorders of the peripheral nervous system and a second group with disorders of the central nervous system.

  12. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
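
    A minimal sketch, under assumptions, of the idea that the LSA photogrammetric resection output serves as the external measurement in a Kalman filter correcting the inertial navigation. The state here is reduced to 3D position only; the authors' filter also carries velocity, attitude and sensor errors.

      import numpy as np

      def kf_update(x_pred, P_pred, z_resection, R):
          """Correct the INS-predicted position with a resection-derived position fix."""
          H = np.eye(3)                                  # measurement = position directly
          S = H @ P_pred @ H.T + R                       # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
          x = x_pred + K @ (z_resection - H @ x_pred)    # corrected state
          P = (np.eye(3) - K @ H) @ P_pred               # corrected covariance
          return x, P

      x_pred = np.array([10.00, 5.00, 1.50])             # INS-predicted position [m]
      P_pred = np.diag([0.04, 0.04, 0.09])               # prediction uncertainty
      z = np.array([10.03, 4.98, 1.52])                  # LSA resection position [m]
      R = np.diag([0.01, 0.01, 0.02])                    # resection measurement noise
      print(kf_update(x_pred, P_pred, z, R)[0])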

  13. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  14. Fiber laser front end for high energy petawatt laser systems

    International Nuclear Information System (INIS)

    Dawson, J W; Messerly, M J; Phan, H; Mitchell, S; Drobshoff, A; Beach, R J; Siders, C; Lucianetti, A; Crane, J K; Barty, C J

    2006-01-01

    We are developing a fiber laser front end suitable for high energy petawatt laser systems on large glass lasers such as NIF. The front end includes generation of the pulses in a fiber mode-locked oscillator, amplification and pulse cleaning, stretching of the pulses to >3 ns, dispersion trimming, timing, fiber transport of the pulses to the main laser bay and amplification of the pulses to an injection energy of 150 μJ. We will discuss the current status of our work including data from packaged components. Design details such as how the system addresses pulse contrast, dispersion trimming and pulse width adjustment, and the impact of B-integral on the pulse amplification, will be discussed. A schematic of the fiber laser system we are constructing is shown in figure 1 below. A 40 MHz packaged mode-locked fiber oscillator produces ∼1 nJ pulses which are phase locked to a 10 MHz reference clock. These pulses are down selected to 100 kHz and then amplified while still compressed. The amplified compressed pulses are sent through a non-linear polarization rotation based pulse cleaner to remove background amplified spontaneous emission (ASE). The pulses are then stretched by a chirped fiber Bragg grating (CFBG) and then sent through a splitter. The splitter splits the signal into two beams. (From this point we follow only one beam as the other follows an identical path.) The pulses are sent through a pulse tweaker that trims dispersion imbalances between the final large optics compressor and the CFBG. The pulse tweaker also permits the dispersion of the system to be adjusted for the purpose of controlling the final pulse width. Fine scale timing between the two beam lines can also be adjusted in the tweaker. A large mode area photonic crystal single polarization fiber is used to transport the pulses from the master oscillator room to the main laser bay. The pulses are then amplified in a two-stage fiber amplifier to 150 μJ. These pulses are then launched into the main amplifier.

  15. A novel diode laser system for photodynamic therapy

    DEFF Research Database (Denmark)

    Samsøe, E.; Andersen, P. E.; Petersen, P.

    2001-01-01

    In this paper a novel diode laser system for photodynamic therapy is demonstrated. The system is based on linear spatial filtering and optical phase conjugate feedback from a photorefractive BaTiO3 crystal. The spatial coherence properties of the diode laser are significantly improved. The system...

  16. Development of portable laser peening systems for nuclear power reactors

    International Nuclear Information System (INIS)

    Chida, Itaru; Uehara, Takuya; Yoda, Masaki; Miyasaka, Hiroyuki; Kato, Hiromi

    2009-01-01

    Stress corrosion cracking (SCC) is the major factor reducing the reliability of aged reactor components. Toshiba has developed various laser-based maintenance and repair technologies and applied them to existing nuclear power plants. Laser-based technology is considered to be the best tool for remote processing in nuclear power plants, particularly for the maintenance and repair of reactor core components. Accessibility can be drastically improved by a simple handling system owing to the absence of reactive force against laser irradiation and the use of flexible optical fiber. For preventive maintenance, laser peening technology was developed and applied to reactor components in operating BWRs and PWRs. Laser peening is a novel process that improves the residual stress from tensile to compressive in the material surface layer by irradiating focused high-power laser pulses in water without any surface preparation. Laser peening systems, which deliver laser pulses with mirrors or through an optical fiber, were developed and have been applied to preventive maintenance against SCC in nuclear power reactors since 1999. Each system was composed of laser oscillators, a beam delivery system, a laser irradiation head, remote handling equipment and a monitor/control system. Beam delivery with mirrors was accomplished through alignment/tracking functions with sufficient accuracy. Reliable fiber delivery was attained by the development of novel input coupling optics and an irradiation head with auto-focusing. Recently, we have developed a portable laser peening (PLP) system which can employ both mirror- and fiber-delivery technologies. The size and weight of the PLP system for the BWR bottom are almost 1/25 of those of the previous system. The PLP system would be applicable to both BWRs and PWRs as one of the maintenance technologies. (author)

  17. Laser systems with diamond optical elements

    International Nuclear Information System (INIS)

    Seitz, J.R.

    1975-01-01

    High power laser systems with optical elements of diamond having a thermal conductivity of at least 10 W/cm·K at 300 K and an optical absorption at the laser beam wavelength of no more than 10 to 20 percent are described. (U.S.)

  18. High-power green diode laser systems for biomedical applications

    DEFF Research Database (Denmark)

    Müller, André

    propagation parameters and therefore efficiently increases the brightness of compact and cost-effective diode laser systems. The condition of overlapping beams is an ideal scenario for subsequent frequency conversion. Based on sum-frequency generation of two beam combined diode lasers a 3.2 fold increase...... output power of frequency doubled single emitters is limited by thermal effects potentially resulting in laser degradation and failure. In this work new concepts for power scaling of visible diode laser systems are introduced that help to overcome current limitations and enhance the application potential....... Implementing the developed concept of frequency converted, beam combined diode laser systems will help to overcome the high pump thresholds for ultrabroad bandwidth titanium sapphire lasers, leading towards diode based high-resolution optical coherence tomography with enhanced image quality. In their entirety...

  19. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy was accomplished by merging fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Evaluation of the control system accuracy was performed using robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system resulted in high fracture reduction reliability with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, contributing to a potential improvement of their quality.
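
    A rough sketch of the two-phase positioning strategy described above (assumed control logic, not the authors' implementation): a fast open-loop move on the full tracked error, followed by small vision-guided corrections until the residual error reported by the optical tracker falls below a tolerance. The function and variable names are hypothetical.

      import numpy as np

      def reduce_fragment(move, tracker_error, tol_mm=0.1, max_iters=20):
          """move(delta): command a relative motion; tracker_error(): current error vector."""
          move(tracker_error())                    # phase 1: fast open-loop move on the full error
          for _ in range(max_iters):               # phase 2: close the loop on optical-tracker feedback
              err = tracker_error()
              if np.linalg.norm(err) < tol_mm:
                  break
              move(0.5 * err)                      # damped vision-guided correction

      # Toy simulation: the robot realises only 90 % of each commanded motion.
      state = {"pos": np.zeros(3)}
      target = np.array([12.0, -3.0, 4.0])         # desired fragment displacement [mm]
      reduce_fragment(move=lambda d: state.update(pos=state["pos"] + 0.9 * np.asarray(d)),
                      tracker_error=lambda: target - state["pos"])
      print(np.round(target - state["pos"], 3))    # residual reduction error [mm]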

  20. Infrared laser scattering system for the plasma diagnostics

    International Nuclear Information System (INIS)

    Hiraki, Naoji; Kawasaki, Shoji; Muraoka, Katsunori

    1975-01-01

    Based on the results of parametric studies of the double-discharge TEA CO₂ laser, the properties required of the laser system for the scattering diagnostics of plasmas are shown to be realized with our CO₂ laser. The direction of future improvements of the laser performance is also discussed. (auth.)

  1. Chaotic dynamics and chaos control in nonlinear laser systems

    International Nuclear Information System (INIS)

    Fang Jinqing; Yao Weiguang

    2001-01-01

    Chaotic dynamics and chaos control have become a great challenge in nonlinear laser systems, and advances in this area are reviewed mainly on the basis of ring cavity laser systems. The principle and stability conditions for time-delay feedback control are analyzed and applied to chaos control in laser systems. Other advanced methods of chaos control, such as weak spatial perturbation and the occasional proportional feedback technique, are discussed. Prospects for applications of chaos control (such as improvement of laser power and performance, synchronized-chaos secure communication, and information processing) are finally pointed out.

  2. Improvement of laser irradiation uniformity in GEKKO XII glass laser system

    International Nuclear Information System (INIS)

    Miyanaga, Noriaki; Matsuoka, Shinichi; Ando, Akinobu; Amano, Shinji; Nakatsuka, Masahiro; Kanabe, Tadashi; Jitsuno, Takahisa; Nakai, Sadao

    1995-01-01

    Uniform laser irradiation is one of the key issues in direct-drive laser fusion research. Several key technologies for uniform laser irradiation are reported. This paper describes the uniformity performance resulting from the introduction of the random phase plate, partially coherent light, and beam smoothing by spectral dispersion into the GEKKO XII glass laser system. Finally, the authors summarize the overall irradiation uniformity on the spherical target surface by considering the power imbalance effect. The technologies developed for beam smoothing and power balance control enable them to achieve irradiation nonuniformities of around the 1% level for a foot pulse and of a few % for a main drive pulse, respectively.

  3. Investigation of dye laser excitation of atomic systems

    International Nuclear Information System (INIS)

    Abate, J.A.

    1977-01-01

    A stabilized cw dye laser system and an optical pumping scheme for a sodium atomic beam were developed, and the improvements over previously existing systems are discussed. A method to stabilize both the output intensity and the frequency of the cw dye laser for periods of several hours is described. The fluctuation properties of this laser are investigated by photon counting and two-time correlation measurements. The results show significant departures from the usual single-mode laser theory in the region of threshold and below. The implications of the deviation from accepted theory are discussed. The atomic beam system that was constructed and tested is described. A method of preparing atomic sodium so that it behaves as a simple two-level atom is outlined, and the results of some experiments to study the resonant interaction between the atoms and the dye laser beam are presented

  4. Artificial Vision, New Visual Modalities and Neuroadaptation

    Directory of Open Access Journals (Sweden)

    Hilmi Or

    2012-01-01

    The aim is to study the descriptions from which artificial vision derives, to explore the new visual modalities resulting from eye surgeries and diseases, and to gain awareness of the use of machine vision systems both for enhancement of visual perception and for a better understanding of neuroadaptation. Science has not yet defined what vision is. However, some optics-based systems and definitions have been established that consider factors involved in the formation of seeing. The best-known system involves the Gabor filter and Gabor patch, which work on edge perception and describe visual perception in the best-known way. These systems are used today in the industry and technology of machines, robots and computers to provide their "seeing". Beyond machinery, these definitions are used in humans for neuroadaptation to new visual modalities after some eye surgeries, or to improve the quality of already known visual modalities. Besides this, "blindsight", which was not known to exist until 35 years ago, can be stimulated with visual exercises. The Gabor system is a description of visual perception definable in machine vision as well as in human visual perception, and it is used today in robotic vision. There are new visual modalities that arise after some eye surgeries or with the use of certain visual optical devices. Blindsight is also a distinct visual modality that is beginning to be defined, even though its exact etiology is not known. In all the new visual modalities, new vision-stimulating therapies using Gabor systems can be applied. (Turk J Ophthalmol 2012; 42: 61-5)

  5. A digital intensity stabilization system for HeNe laser

    Science.gov (United States)

    Wei, Zhimeng; Lu, Guangfeng; Yang, Kaiyong; Long, Xingwu; Huang, Yun

    2012-02-01

    A digital intensity stabilization system for a HeNe laser is developed. The laser power supply is built around a switching power IC, and a general-purpose microcontroller implements digital PID control; together they form a closed loop that stabilizes the laser intensity by regulating its discharge current. The laser tube is made of glass ceramics, and its integrated structure is steady enough to suppress high-frequency intensity fluctuations and attenuate intensity fluctuations in general, which makes the control loop easy to tune. The control loop between the discharge current and the photodiode voltage eliminates long-term drifts. The intensity stability of the HeNe laser with this system is 0.014% over 12 h.
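
    A hedged sketch of a digital PID loop of the kind described above; the gains, sample period and signal names are assumptions, not the paper's values. The photodiode voltage is compared with a set-point and the controller output trims the discharge current.

      class PID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_err = 0.0

          def step(self, setpoint, measurement):
              err = setpoint - measurement
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      pid = PID(kp=0.5, ki=2.0, kd=0.0, dt=0.01)                     # assumed tuning
      current_trim_mA = pid.step(setpoint=2.500, measurement=2.493)  # photodiode volts
      print(f"discharge-current correction: {current_trim_mA:+.4f} mA")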

  6. Phase Noise Comparison of Short Pulse Laser Systems

    Energy Technology Data Exchange (ETDEWEB)

    S. Zhang; S. V. Benson; J. Hansknecht; D. Hardy; G. Neil; Michelle D. Shinn

    2006-12-01

    This paper describes the phase noise measurement on several different mode-locked laser systems that have completely different gain media and configurations including a multi-kW free-electron laser. We will focus on the state of the art short pulse lasers, especially the drive lasers for photocathode injectors. A comparison between the phase noise of the drive laser pulses, electron bunches and FEL pulses will also be presented.

  7. Excimer laser beam delivery systems for medical applications

    Science.gov (United States)

    Kubo, Uichi; Hashishin, Yuichi; Okada, Kazuyuki; Tanaka, Hiroyuki

    1993-05-01

    We have been carrying out basic experiments on the interaction of UV laser beams with biotissue using both KrF and XeCl lasers. However, conventional optical fiber cannot be used for high-power UV beams, so we have been investigating UV power beam delivery systems. These experiments were carried out with element-doped quartz fibers and a hollow tube; the doped elements are OH ion, chlorine and fluorine. In our latest work, we have studied ArF excimer laser and biotissue interactions, and the corresponding beam delivery experiments. From our experimental results, we found that the ArF laser beam has high incision ability for hard biotissue. For example, in the case of cow bone incision, the incision depth with the ArF laser was about 15 times that of the KrF laser. Therefore, the ArF laser is expected to be useful for hard-biotissue therapy as a non-thermal method. However, its beam delivery remains difficult at this time. We will develop ArF laser beam delivery systems.

  8. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    International Nuclear Information System (INIS)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik

    2016-01-01

    Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure have been proposed and many studies have been conducted on them. The vision-based measurement method is a noncontact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, and shape of objects, the possibility of full-field measurement, and the possibility of mapping the distribution of stress or defects in structures based on the measured displacement and strain. The strains were measured with various image-based methods in the coupon test and the measurements were compared. In the future, the validity of the algorithm will be verified by comparison with strain gauge and clip gauge measurements, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.

  9. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure have been proposed and many studies have been conducted on them. The vision-based measurement method is a noncontact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, and shape of objects, the possibility of full-field measurement, and the possibility of mapping the distribution of stress or defects in structures based on the measured displacement and strain. The strains were measured with various image-based methods in the coupon test and the measurements were compared. In the future, the validity of the algorithm will be verified by comparison with strain gauge and clip gauge measurements, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.

  10. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the best- ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age and Race/Ethnicity; Table for 2010 ...

  11. Wavelength stabilized multi-kW diode laser systems

    Science.gov (United States)

    Köhler, Bernd; Unger, Andreas; Kindervater, Tobias; Drovs, Simon; Wolf, Paul; Hubrich, Ralf; Beczkowiak, Anna; Auch, Stefan; Müntz, Holger; Biesenbach, Jens

    2015-03-01

    We report on wavelength stabilized high-power diode laser systems with enhanced spectral brightness by means of Volume Holographic Gratings. High-power diode laser modules typically have a relatively broad spectral width of about 3 to 6 nm. In addition the center wavelength shifts by changing the temperature and the driving current, which is obstructive for pumping applications with small absorption bandwidths. Wavelength stabilization of high-power diode laser systems is an important method to increase the efficiency of diode pumped solid-state lasers. It also enables power scaling by dense wavelength multiplexing. To ensure a wide locking range and efficient wavelength stabilization the parameters of the Volume Holographic Grating and the parameters of the diode laser bar have to be adapted carefully. Important parameters are the reflectivity of the Volume Holographic Grating, the reflectivity of the diode laser bar as well as its angular and spectral emission characteristics. In this paper we present detailed data on wavelength stabilized diode laser systems with and without fiber coupling in the spectral range from 634 nm up to 1533 nm. The maximum output power of 2.7 kW was measured for a fiber coupled system (1000 μm, NA 0.22), which was stabilized at a wavelength of 969 nm with a spectral width of only 0.6 nm (90% value). Another example is a narrow line-width diode laser stack, which was stabilized at a wavelength of 1533 nm with a spectral bandwidth below 1 nm and an output power of 835 W.

  12. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Directory of Open Access Journals (Sweden)

    Došen Strahinja

    2010-08-01

    Background: Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: (1) the user triggers the system and controls the orientation of the hand; (2) a high-level controller automatically selects the grasp type and size; and (3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and
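
    An illustrative rule-based grasp selection in the spirit of the high-level controller described above; the thresholds and grasp names are assumptions, not the actual CVS rules. Object dimensions estimated by the vision system are mapped to a grasp type and an aperture size.

      def select_grasp(width_mm, height_mm):
          """Map vision-estimated object dimensions to a grasp type and size command."""
          if width_mm < 25:
              grasp = "pinch"          # small objects: two-finger pinch
          elif height_mm > 120:
              grasp = "lateral"        # tall, slender objects
          else:
              grasp = "palmar"         # default power grasp
          size = "small" if width_mm < 50 else ("medium" if width_mm < 80 else "large")
          return grasp, size

      print(select_grasp(width_mm=62, height_mm=90))   # -> ('palmar', 'medium')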

  13. Vision-Based System for Human Detection and Tracking in Indoor Environment

    OpenAIRE

    Benezeth , Yannick; Emile , Bruno; Laurent , Hélène; Rosenberger , Christophe

    2010-01-01

    In this paper, we propose a vision-based system for human detection and tracking in an indoor environment using a static camera. The proposed method is based on object recognition in still images combined with methods using temporal information from the video; in doing so, we improve the performance of the overall system and reduce the task complexity. We first use background subtraction to limit the search space of the classifier. The segmentation is realized by modeling ...
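
    A minimal running-average background-subtraction sketch, one plausible way to implement the step mentioned above (the paper's own background model may differ): pixels deviating from a slowly updated background form the foreground mask that limits the classifier's search space.

      import numpy as np

      class BackgroundModel:
          def __init__(self, first_frame, alpha=0.02, threshold=25):
              self.bg = first_frame.astype(np.float32)
              self.alpha = alpha              # background learning rate
              self.threshold = threshold      # min gray-level difference for foreground

          def foreground_mask(self, frame):
              diff = np.abs(frame.astype(np.float32) - self.bg)
              mask = diff > self.threshold
              self.bg = (1 - self.alpha) * self.bg + self.alpha * frame  # update model
              return mask

      # Example with synthetic frames.
      bg0 = np.full((120, 160), 80, dtype=np.uint8)
      model = BackgroundModel(bg0)
      frame = bg0.copy(); frame[40:80, 60:90] = 200      # a "person" appears
      print(model.foreground_mask(frame).sum(), "foreground pixels")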

  14. Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Science.gov (United States)

    Liu, Xuan; Furrer, David; Kosters, Jared; Holmes, Jack

    2018-01-01

    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations), which constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: (1) four society workshops (AIAA, NAFEMS, and two TMS), (2) a community-wide survey, and (3) the establishment of 9 expert panels (one per KE), each consisting on average of 10 non-team members from academia, government and industry, to review and update content and prioritize gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost

  15. Principles of image processing in machine vision systems for the color analysis of minerals

    Science.gov (United States)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At the moment, color sorting is one of the promising methods of mineral raw material enrichment. This method is based on the registration of color differences between images of the analyzed objects. As is generally known, the difficulty of delimiting close color tints when sorting low-contrast minerals is one of the main disadvantages of the color sorting method. This can be related to a wrong choice of color model and incomplete image processing in the machine vision system realizing the color sorting algorithm. Another problem is the need to reconfigure the image processing features when the type of analyzed minerals changes. This is due to the fact that the optical properties of mineral samples vary from one mineral deposit to another; searching for suitable values of the image processing features is therefore a non-trivial task, and this task does not always have an acceptable solution. In addition, there are no uniform guidelines for determining the criteria of mineral sample separation. It is assumed that the reconfiguration of image processing features should be done by machine learning, but in practice it is carried out by adjusting operating parameters that are satisfactory for one specific enrichment task. This approach usually means that the machine vision system is unable to rapidly estimate the concentration rate of the analyzed mineral ore using the color sorting method. This paper presents the results of research aimed at addressing the mentioned shortcomings in the organization of image processing for machine vision systems used for color sorting of mineral samples. The principles of color analysis of low-contrast minerals using machine vision systems are also studied. In addition, a special processing algorithm for color images of mineral samples is developed; this algorithm automatically determines the criteria of mineral sample separation based on an analysis of representative mineral samples. Experimental studies of the proposed algorithm
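
    A hedged sketch of one way separation criteria could be derived from representative samples (an assumption, not the authors' algorithm): a mean colour and spread are estimated for each mineral class from training pixels, and new pixels are assigned to the nearest class mean, with pixels too far from every class left unclassified.

      import numpy as np

      def fit_classes(training_pixels):
          """training_pixels: {class_name: (N, 3) array of RGB samples}."""
          return {name: (px.mean(axis=0), px.std(axis=0).mean())
                  for name, px in training_pixels.items()}

      def classify_pixel(rgb, classes, k=3.0):
          name, (mean, spread) = min(classes.items(),
                                     key=lambda kv: np.linalg.norm(rgb - kv[1][0]))
          return name if np.linalg.norm(rgb - mean) <= k * spread else "unclassified"

      classes = fit_classes({
          "ore":    np.array([[90, 70, 60], [95, 72, 66], [88, 69, 58]], float),
          "gangue": np.array([[180, 178, 175], [190, 185, 182], [175, 172, 170]], float),
      })
      print(classify_pixel(np.array([92, 71, 62], float), classes))   # -> ore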

  16. The copper-pumped dye laser system at Lawrence Livermore National Laboratory

    International Nuclear Information System (INIS)

    Hackel, R.P.; Warner, B.E.

    1993-01-01

    The Lawrence Livermore National Laboratory's (LLNL) Atomic Vapor Laser Isotope Separation (AVLIS) Program has developed a high-average-power, pulsed, tunable, visible laser system. Testing of this hardware is in progress at industrial scale. The LLNL copper-dye laser system is prototypical of a basic module of a uranium-AVLIS plant. The laser demonstration facility (LDF) system consists of copper vapor lasers arranged in oscillator-amplifier chains providing optical pump power to dye-laser master-oscillator-power-amplifier chains. This system is capable of thousands of watts (average) tunable between 550 and 650 nm. The copper laser system at LLNL consists of 12 chains operating continuously. The copper lasers operate at nominally 4.4 kHz, with 50 ns pulse widths and produce 20 W at near the diffraction limit from oscillators and >250 W from each amplifier. Chains consist of an oscillator and three amplifiers and produce >750 W average, with availabilities >95% (i.e., >8,300 h/y). The total copper laser system power averages ∼9,000 W and has operated at over 10,000 W for extended intervals. The 12 copper laser beams are multiplexed and delivered to the dye laser system where they pump multiple dye laser chains. Each dye chain consists of a master oscillator and three or four power amplifiers. The master oscillator operates at nominally 100 mW with a 50 MHz single mode bandwidth. Amplifiers are designed to efficiently amplify the dye beam with low ASE content and high optical quality. Sustained dye chain powers are up to 1,400 W with dye conversion efficiency >50%, ASE content <5%, and wavefront quality correctable to <λ/10 RMS, using deformable mirrors. Since the timing of the copper laser chains can be offset, the dye laser system is capable of repetition rates which are multiples of 4.4 kHz, up to 26 kHz, limited by the dye pumping system. Development of plant-scale copper and dye laser hardware is progressing in off-line facilities

  17. Agent-Oriented Embedded Control System Design and Development of a Vision-Based Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2012-07-01

    This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV) based on the multi-agent system (MAS) methodology and embedded system resources. A three-phase agent-oriented design methodology Prometheus is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP) and an advanced RISC machine (ARM) by using the multitasking processing capacity of multiple microprocessors and system services of a real-time operating system (RTOS). As a paradigm, an onboard embedded controller is designed and developed for the AGV with a camera detecting guiding landmarks, and the entire procedure has a high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and the onboard intelligence of the agent-oriented embedded control system.

  18. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high-efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts) and laser-based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of the three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  19. Research on laser cladding control system based on fuzzy PID

    Science.gov (United States)

    Zhang, Chuanwei; Yu, Zhengyang

    2017-12-01

    Laser cladding technology places high demands on the control system, and domestic laser cladding control systems mostly use the traditional PID control algorithm, so there is considerable room for improvement. In this work, a three-closed-loop control system based on fuzzy PID, which is well suited to laser cladding technology, was designed and compared with conventional PID. With the control system operating properly, laser cladding experiments and friction and wear experiments were carried out. The experiments show that, compared with the conventional PID algorithm, the surface of the cladding layer obtained under the fuzzy PID algorithm is smoother, the surface roughness is improved, and the wear resistance of the cladding layer is also enhanced.
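
    An illustrative fuzzy-PID sketch; the membership functions, rule table and gains are assumptions rather than the paper's controller. The error magnitude is fuzzified into coarse categories and a small rule table scales the baseline PID gains before the usual PID law is applied.

      def fuzzify(x, small=0.2, large=1.0):
          """Return membership degrees (SMALL, MEDIUM, LARGE) of |x|."""
          a = abs(x)
          s = max(0.0, 1.0 - a / small)
          l = min(1.0, max(0.0, (a - small) / (large - small)))
          m = max(0.0, 1.0 - s - l)
          return s, m, l

      def fuzzy_pid_gains(err, derr, kp0=1.0, ki0=0.5, kd0=0.05):
          s, m, l = fuzzify(err)
          # Assumed rule table: large error -> raise Kp, lower Ki; small error -> raise Ki.
          kp = kp0 * (0.8 * s + 1.0 * m + 1.4 * l)
          ki = ki0 * (1.3 * s + 1.0 * m + 0.6 * l)
          kd = kd0 * (1.0 + 0.5 * abs(derr))   # more damping when the error changes quickly
          return kp, ki, kd

      print(fuzzy_pid_gains(err=0.9, derr=-0.1))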

  20. High-Voltage Power Supply System for Laser Isotope Separation

    Energy Technology Data Exchange (ETDEWEB)

    Ketaily, E.C.; Buckner, R.P.; Uhrik, R.L.

    1979-06-26

    This report presents several concepts for Laser High-Voltage Power Supply (HVPS) Systems for a Laser Isotope Separation facility. Selection of equipment and its arrangement into operational systems is based on proven designs and on application concepts now being developed. This report has identified a number of alternative system arrangements and has provided preliminary cost estimates for each. The report includes a recommendation for follow-on studies that will further define the optimum Laser HVPS Systems. Brief descriptions are given of Modulator/Regulator circuit trade-offs, system control interfaces, and their impact on costs.

  1. High-Voltage Power Supply System for Laser Isotope Separation

    International Nuclear Information System (INIS)

    Ketaily, E.C.; Buckner, R.P.; Uhrik, R.L.

    1979-01-01

    This report presents several concepts for Laser High-Voltage Power Supply (HVPS) Systems for a Laser Isotope Separation facility. Selection of equipment and its arrangement into operational systems is based on proven designs and on application concepts now being developed. This report has identified a number of alternative system arrangements and has provided preliminary cost estimates for each. The report includes a recommendation for follow-on studies that will further define the optimum Laser HVPS Systems. Brief descriptions are given of Modulator/Regulator circuit trade-offs, system control interfaces, and their impact on costs.

  2. Aurora multikilojoule KrF laser system prototype for inertial confinement fusion

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Hanlon, J.A.; Mc Leod, J.; Kang, M.; Kortegaard, B.L.; Burrows, M.D.; Bowling, P.S.

    1987-01-01

    Aurora is the Los Alamos National Laboratory short-pulse, high-power, KrF laser system. It serves as an end-to-end technology demonstration for large-scale ultraviolet laser systems of interest for short wavelength, inertial confinement fusion (ICF) investigations. The system is a prototype for using optical angular multiplexing and serial amplification by large electron-beam-driven KrF laser amplifiers to deliver stacked, 248-nm, 5-ns duration multikilojoule laser pulses to ICF targets using a ∼1-km-long optical beam path. The entire Aurora KrF laser system is described and the design features of the following major system components are summarized: front-end lasers, amplifier train, multiplexer, optical relay train, demultiplexer, target irradiation apparatus, and alignment and controls systems

  3. VISION: a Versatile and Innovative SIlicOn tracking system

    CERN Document Server

    Lietti, Daniela; Vallazza, Erik

    This thesis work focuses on the study of the performance of different tracking and profilometry systems (the so-called INSULAB, INSUbria LABoratory, and VISION, Versatile and Innovative SIlicON, Telescopes) used in the last years by the NTA-HCCC, the COHERENT (COHERENT effects in crystals for the physics of accelerators), ICE-RAD (Interaction in Crystals for Emission of RADiation) and CHANEL (CHAnneling of NEgative Leptons) experiments, four collaborations of the INFN (Istituto Nazionale di Fisica Nucleare) dedicated to the research in the crystals physics field.

  4. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality; some errors cannot even be explained or rendered with clear expressions and are therefore difficult to compensate directly. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by a mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval of spatially isolated objects. The results demonstrate the validity and accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
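
    A minimal sketch of calibrating a phase-to-coordinate mapping by least squares; the simple per-pixel polynomial model and the calibration data are assumptions, not the paper's extended model, but they illustrate the least-squares parameter estimation step.

      import numpy as np

      phase = np.array([1.2, 2.5, 3.9, 5.1, 6.4])            # unwrapped phase at one pixel
      z_true = np.array([10.0, 20.1, 29.8, 40.2, 50.1])      # known calibration depths [mm]

      A = np.vstack([phase**2, phase, np.ones_like(phase)]).T   # design matrix for z = a*p^2 + b*p + c
      coeffs, *_ = np.linalg.lstsq(A, z_true, rcond=None)       # least-squares parameter estimate
      print("z ~ {:.3f}*p^2 + {:.3f}*p + {:.3f}".format(*coeffs))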

  5. Vision restoration after brain and retina damage: the "residual vision activation theory".

    Science.gov (United States)

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. However, sustained improvements require repetitive

  6. PEP Laser Surveying System

    International Nuclear Information System (INIS)

    Lauritzen, T.; Sah, R.C.

    1979-03-01

    A Laser Surveying System has been developed to survey the beam elements of the PEP storage ring. This system provides automatic data acquisition and analysis in order to increase survey speed and to minimize operator error. Two special instruments, the Automatic Readout Micrometer and the Small Automatic Micrometer, have been built for measuring the locations of fiducial points on beam elements with respect to the light beam from a laser. These instruments automatically encode offset distances and read them into the memory of an on-line computer. Distances along the beam line are automatically encoded with a third instrument, the Automatic Readout Tape Unit. When measurements of several beam elements have been taken, the on-line computer analyzes the measured data, compares them with desired parameters, and calculates the required adjustments to beam element support stands.

  7. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and exercise training. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object's color and then perform color segmentation. When an action judgment may be wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best action judgment from the weighted votes. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
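
    A hedged sketch of a weight-voting step consistent with the description above (scores and weights are illustrative; the paper's exact scheme is not given): each candidate action judgment carries a condition score and a weight, and the action with the highest weighted total is taken as the final judgment.

      from collections import defaultdict

      def weighted_vote(judgments):
          """judgments: iterable of (action, score, weight) tuples."""
          totals = defaultdict(float)
          for action, score, weight in judgments:
              totals[action] += score * weight
          return max(totals, key=totals.get)

      frames = [("raise_left_arm", 0.7, 1.0),
                ("raise_left_arm", 0.6, 0.8),
                ("wave", 0.9, 0.5)]
      print(weighted_vote(frames))   # -> raise_left_arm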

  8. The use of contact lens telescopic systems in low vision rehabilitation.

    Science.gov (United States)

    Vincent, Stephen J

    2017-06-01

    Refracting telescopes are afocal compound optical systems consisting of two lenses that produce an apparent magnification of the retinal image. They are routinely used in visual rehabilitation in the form of monocular or binocular hand held low vision aids, and head or spectacle-mounted devices to improve distance visual acuity, and with slight modifications, to enhance acuity for near and intermediate tasks. Since the advent of ground glass haptic lenses in the 1930's, contact lenses have been employed as a useful refracting element of telescopic systems; primarily as a mobile ocular lens (the eyepiece), that moves with the eye. Telescopes which incorporate a contact lens eyepiece significantly improve the weight, cosmesis, and field of view compared to traditional spectacle-mounted telescopes, in addition to potential related psycho-social benefits. This review summarises the underlying optics and use of contact lenses to provide telescopic magnification from the era of Descartes, to Dallos, and the present day. The limitations and clinical challenges associated with such devices are discussed, along with the potential future use of reflecting telescopes incorporated within scleral lenses and tactile contact lens systems in low vision rehabilitation. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  9. Stability design considerations for mirror support systems in ICF lasers

    International Nuclear Information System (INIS)

    Tietbohl, G.L.; Sommer, S.C.

    1996-10-01

    Some of the major components of laser systems used for Inertial Confinement Fusion (ICF) are the large aperture mirrors which direct the path of the laser. These mirrors are typically supported by systems which consist of mirror mounts, mirror enclosures, superstructures, and foundations. Stability design considerations for the support systems of large aperture mirrors have been developed based on the experience of designing and evaluating similar systems at the Lawrence Livermore National Laboratory (LLNL). Examples of the systems developed at LLNL include Nova, the Petawatt laser, Beamlet, and the National Ignition Facility (NIF). The structural design of support systems of large aperture mirrors has typically been controlled by stability considerations in order for the large laser system to meet its performance requirements for alignment and positioning. This paper will discuss the influence of stability considerations and will provide guidance on the structural design and evaluation of mirror support systems in ICF lasers so that this information can be used on similar systems

  10. Computer Vision for Timber Harvesting

    DEFF Research Database (Denmark)

    Dahl, Anders Lindbjerg

    The goal of this thesis is to investigate computer vision methods for timber harvesting operations. The background for developing computer vision for timber harvesting is to document origin of timber and to collect qualitative and quantitative parameters concerning the timber for efficient harvest...... segments. The purpose of image segmentation is to make the basis for more advanced computer vision methods like object recognition and classification. Our second method concerns image classification and we present a method where we classify small timber samples to tree species based on Active Appearance...... to the development of the logTracker system the described methods have a general applicability making them useful for many other computer vision problems....

  11. Five Wavelength DFB Fibre Laser Source for WDM Systems

    DEFF Research Database (Denmark)

    Hübner, Jörg; Varming, Poul; Kristensen, Martin

    1997-01-01

    Singlemode UV-induced distributed feedback (DFB) fibre lasers with a linewidth of lasers is verified by a 10 Gbit/s transmission experiment. Five DFB fibre lasers are cascaded and pumped by a single...... semiconductor laser, thereby forming a multiwavelength source for WDM systems...

  12. A neural network based artificial vision system for licence plate recognition.

    Science.gov (United States)

    Draghici, S

    1997-02-01

    This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, controlled reliability threshold, both off-line and on-line learning, self assessment of the output reliability and high reliability based on high level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable for a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine which is suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint based decomposition (CBD) training architecture. The system has shown the following performance (on average) on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98% and successful recognition of complete registration plates about 80%.

  13. Railgun system using a laser-induced plasma armature

    International Nuclear Information System (INIS)

    Onozuka, M.; Oda, Y.; Azuma, K.

    1996-01-01

    Development of an electromagnetic railgun system that utilizes a laser-induced plasma armature formation has been conducted to investigate the application of the railgun system for high-speed pellet injection into fusion plasmas. Using the laser-induced plasma formation technique, the required breakdown voltage was reduced by one-tenth compared with that for the spark-discharged plasma. The railgun system successfully accelerated the laser-induced plasma armature by an electromagnetic force that accelerated the pellet. The highest velocity of the solid hydrogen pellets, obtained so far, was 2.6 km/sec using a 2m-long railgun. copyright 1996 American Institute of Physics

  14. Railgun system using a laser-induced plasma armature

    Science.gov (United States)

    Onozuka, Masanori; Oda, Yasushi; Azuma, Kingo

    1996-05-01

    Development of an electromagnetic railgun system that utilizes a laser-induced plasma armature formation has been conducted to investigate the application of the railgun system for high-speed pellet injection into fusion plasmas. Using the laser-induced plasma formation technique, the required breakdown voltage was reduced by one-tenth compared with that for the spark-discharged plasma. The railgun system successfully accelerated the laser-induced plasma armature by an electromagnetic force that accelerated the pellet. The highest velocity of the solid hydrogen pellets, obtained so far, was 2.6 km/sec using a 2m-long railgun.

  15. Machine Vision Tests for Spent Fuel Scrap Characteristics

    International Nuclear Information System (INIS)

    BERGER, W.W.

    2000-01-01

    The purpose of this work is to perform a feasibility test of a Machine Vision system for potential use at the Hanford K basins during spent nuclear fuel (SNF) operations. This report documents the testing performed to establish functionality of the system including quantitative assessment of results. Fauske and Associates, Inc., which has been intimately involved in development of the SNF safety basis, has teamed with Agris-Schoen Vision Systems, experts in robotics, tele-robotics, and Machine Vision, for this work

  16. Laser systems for ablative fractional resurfacing

    DEFF Research Database (Denmark)

    Paasch, Uwe; Haedersdal, Merete

    2011-01-01

    of a variety of skin conditions, primarily chronically photodamaged skin, but also acne and burn scars. In addition, it is anticipated that AFR can be utilized in the laser-assisted delivery of topical drugs. Clinical efficacy coupled with minimal downtime has driven the development of various fractional...... ablative laser systems. Fractionated CO(2) (10,600-nm), erbium yttrium aluminum garnet, 2940-nm and yttrium scandium gallium garnet, 2790-nm lasers are available. In this article, we present an overview of AFR technology, devices and histopathology, and we summarize the current clinical possibilities...

  17. Laser systems for ablative fractional resurfacing

    DEFF Research Database (Denmark)

    Paasch, Uwe; Haedersdal, Merete

    2011-01-01

    ablative laser systems. Fractionated CO(2) (10,600-nm), erbium yttrium aluminum garnet, 2940-nm and yttrium scandium gallium garnet, 2790-nm lasers are available. In this article, we present an overview of AFR technology, devices and histopathology, and we summarize the current clinical possibilities...... of a variety of skin conditions, primarily chronically photodamaged skin, but also acne and burn scars. In addition, it is anticipated that AFR can be utilized in the laser-assisted delivery of topical drugs. Clinical efficacy coupled with minimal downtime has driven the development of various fractional...

  18. Laser rangefinders for autonomous intelligent cruise control systems

    Science.gov (United States)

    Journet, Bernard A.; Bazin, Gaelle

    1998-01-01

    The purpose of this paper is to show in what kinds of applications laser rangefinders can be used inside autonomous intelligent cruise control systems. Even if laser systems offer good performance, the safety and technical considerations are very restrictive. As the system is used outdoors, the emitted average output power must respect the rather low class 1A level. Obstacle detection or collision avoidance requires a 200 meter range. Moreover, bad weather conditions, like rain or fog, are disastrous. We have conducted measurements on laser rangefinders using different targets and at different distances. We can infer that, except for cooperative targets, low-power laser rangefinders are not powerful enough for long-distance measurement. Radars, like 77 GHz systems, are better adapted to such cases. But for short-distance measurements, with a range around 10 meters and a minimum distance around twenty centimeters, laser rangefinders are really useful, with good resolution and rather low cost. Applications include the following of white lines on the road (the target being easily cooperative) and the detection of vehicles in the vicinity, which means car convoy traffic control or parking assistance, the target surface being indifferent at short distances.

  19. Fiber laser master oscillators for optical synchronization systems

    International Nuclear Information System (INIS)

    Winter, A.

    2008-04-01

    New X-ray free-electron lasers (e.g., the European XFEL) require a new generation of synchronization systems to achieve a stability of the FEL pulse such that pump-probe experiments can fully utilize the ultra-short pulse duration (50 fs). An optical synchronization system has been developed based on the distribution of sub-ps optical pulses in length-stabilized fiber links. The synchronization information is contained in the precise repetition frequency of the optical pulses. In this thesis, the design and characterization of the laser serving as laser master oscillator is presented. An erbium-doped mode-locked fiber laser was chosen. Amplitude and phase noise were measured, and record-low values of 0.03% and 10 fs were obtained for the frequency range from 1 kHz to the Nyquist frequency. Furthermore, an initial proof-of-principle experiment for the optical synchronization system was performed in an accelerator environment. In this experiment, the fiber laser was phase-locked to a microwave reference oscillator, and a 500-meter-long fiber link was stabilized to 12 fs rms over a range of 0.1 Hz to 20 kHz. RF signals were obtained from a photodetector without significant degradation at the end of the link. Furthermore, the laser master oscillator for FLASH was designed and is presently in fabrication, and the initial infrastructure for the optical synchronization system was set up. (orig.)

  20. Fiber laser master oscillators for optical synchronization systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, A.

    2008-04-15

    New X-ray free-electron lasers (e.g., the European XFEL) require a new generation of synchronization systems to achieve a stability of the FEL pulse such that pump-probe experiments can fully utilize the ultra-short pulse duration (50 fs). An optical synchronization system has been developed based on the distribution of sub-ps optical pulses in length-stabilized fiber links. The synchronization information is contained in the precise repetition frequency of the optical pulses. In this thesis, the design and characterization of the laser serving as laser master oscillator is presented. An erbium-doped mode-locked fiber laser was chosen. Amplitude and phase noise were measured, and record-low values of 0.03% and 10 fs were obtained for the frequency range from 1 kHz to the Nyquist frequency. Furthermore, an initial proof-of-principle experiment for the optical synchronization system was performed in an accelerator environment. In this experiment, the fiber laser was phase-locked to a microwave reference oscillator, and a 500-meter-long fiber link was stabilized to 12 fs rms over a range of 0.1 Hz to 20 kHz. RF signals were obtained from a photodetector without significant degradation at the end of the link. Furthermore, the laser master oscillator for FLASH was designed and is presently in fabrication, and the initial infrastructure for the optical synchronization system was set up. (orig.)

  1. Development of a vision-based pH reading system

    Science.gov (United States)

    Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon

    2015-10-01

    pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radio-isotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may have some errors due to the limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and related software. The proposed pH reading system is developed with a vision algorithm based on an RGB library. The pH reading system is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera, and a data acquisition (DAQ) board. To improve the sensitivity, we utilize the three primary colors of LEDs (light-emitting diodes) in the reading device. Using the three primary colors is better than using a single white LED because of their well-defined wavelengths. The other part is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program inserts the color codes of the pH paper into the database; then, in reading mode, the CCD camera captures the pH paper and compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
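
    As a rough illustration of the RGB-library lookup described above, the sketch below (not the authors' code) matches a captured pH-paper colour against a small reference table; the pH values and RGB entries are placeholders.

```python
# Hypothetical sketch of an RGB-library pH lookup: the captured pH-paper colour
# is compared against stored reference colours and the closest entry (in RGB
# distance) is reported. Reference values are illustrative only.
import numpy as np

# Illustrative reference library: pH value -> mean RGB of the reference patch
PH_LIBRARY = {
    4.0: (231, 132, 46),
    5.0: (222, 168, 62),
    6.0: (190, 186, 71),
    7.0: (126, 160, 86),
    8.0: (74, 129, 120),
}

def read_ph(sample_rgb):
    """Return the library pH whose reference colour is nearest to the sample."""
    sample = np.asarray(sample_rgb, dtype=float)
    distances = {ph: np.linalg.norm(sample - np.asarray(ref, dtype=float))
                 for ph, ref in PH_LIBRARY.items()}
    return min(distances, key=distances.get)

print(read_ph((128, 158, 90)))  # -> 7.0 for this illustrative sample
```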

  2. Multitube coaxial closed cycle gas laser system

    International Nuclear Information System (INIS)

    Davis, J.W.; Walch, A.P.

    1975-01-01

    A gas laser design capable of long-term reliable operation in a commercial environment is disclosed. Various construction details which insulate the laser optics from mechanical distortions and vibrations inevitably present in the environment are developed. Also, a versatile optical cavity made up of modular units, which render the basic laser configuration adaptable to alternate designs with different output capabilities, is shown in detail. The system is built around a convection laser operated in a closed cycle, and the working medium is a gas which is excited by direct-current electric discharges. (auth)

  3. Infrared laser scattering system for plasma diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Muraoka, K; Hiraki, N; Kawasaki, S [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics]

    1975-05-01

    The possibility of observing the collective scattering of infrared laser light from plasmas is discussed in terms of the laser power requirement, the necessary optical system, and the detector performance, and is shown to be feasible with present-day techniques to obtain the ion temperature by means of a CO2 laser on theta-pinch plasmas. Based on this estimate, the construction of the TEA CO2 laser and the preparation of the optical components have been started, and some preliminary results are described.

  4. Infrared laser scattering system for plasma diagnostics

    International Nuclear Information System (INIS)

    Muraoka, Katsunori; Hiraki, Naoji; Kawasaki, Shoji

    1975-01-01

    The possibility of observing the collective scattering of infrared laser light from plasmas is discussed in terms of the laser power requirement, the necessary optical system, and the detector performance, and is shown to be feasible with present-day techniques to obtain the ion temperature by means of a CO2 laser on theta-pinch plasmas. Based on this estimate, the construction of the TEA CO2 laser and the preparation of the optical components have been started, and some preliminary results are described. (auth.)

  5. Infrared machine vision system for the automatic detection of olive fruit quality.

    Science.gov (United States)

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection, and pixel intensity values to classify the whole fruit. The detection of defects involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.
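
    The pixel-classification step described above can be illustrated with a minimal sketch, assuming per-class intensity histograms stand in for the nonparametric models; the function names and all sample data below are synthetic placeholders, not the published system.

```python
# Minimal sketch (not the authors' code) of nonparametric pixel classification
# on a NIR intensity image: per-class histograms estimated from labelled
# samples act as likelihoods, and each pixel is assigned to the class with the
# higher likelihood. All sample data below are synthetic placeholders.
import numpy as np

BINS = 64  # histogram resolution for 8-bit NIR intensities

def fit_histogram(samples):
    hist, _ = np.histogram(samples, bins=BINS, range=(0, 256), density=True)
    return hist + 1e-9  # avoid zero likelihoods

def classify_pixels(image, healthy_hist, defect_hist):
    idx = np.clip((image.astype(int) * BINS) // 256, 0, BINS - 1)
    return (defect_hist[idx] > healthy_hist[idx]).astype(np.uint8)  # 1 = defect

# Synthetic training data and image, for illustration only
rng = np.random.default_rng(0)
healthy_hist = fit_histogram(rng.normal(180, 15, 5000))
defect_hist = fit_histogram(rng.normal(90, 20, 5000))
nir_image = rng.normal(170, 40, (120, 160)).clip(0, 255)
defect_mask = classify_pixels(nir_image, healthy_hist, defect_hist)
print("defective pixel fraction:", defect_mask.mean())
```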

  6. Comparison of the Infiniti vision and the series 20,000 Legacy systems.

    Science.gov (United States)

    Fernández de Castro, Luis E; Solomon, Kerry D; Hu, Daniel J; Vroman, David T; Sandoval, Helga P

    2008-01-01

    To compare the efficiency of the Infiniti vision system and the Series 20,000 Legacy system phacoemulsification units during routine cataract extraction. Thirty-nine eyes of 39 patients were randomized to have their cataract removed using either the Infiniti or the Legacy system, both using the Neosonix handpiece. System settings were standardized. Ultrasound time, amount of balanced salt solution (BSS) used intraoperatively, and postoperative visual acuity at postoperative days 1, 7 and 30 were evaluated. Preoperatively, best corrected visual acuity was significantly worse in the Infiniti group compared to the Legacy group (0.38 +/- 0.23 and 0.21 +/- 0.16, respectively; p = 0.012). The mean phacoemulsification time was 39.6 +/- 22.9 s (range 6.0-102.0) for the Legacy group and 18.3 +/- 19.1 s (range 1.0-80.0) for the Infiniti group (p = 0.001). The mean amounts of intraoperative BSS used were 117 +/- 37.7 ml (range 70-195) in the Legacy group and 85.3 +/- 38.9 ml (range 40-200) in the Infiniti group (p = 0.005). No differences in postoperative visual acuity were found. The ability to use higher flow rates and vacuum settings with the Infiniti vision system allowed for cataract removal with less phacoemulsification time than when using the Legacy system. Copyright 2008 S. Karger AG, Basel.

  7. Mid-IR laser system for advanced neurosurgery

    Science.gov (United States)

    Klosner, M.; Wu, C.; Heller, D. F.

    2014-03-01

    We present work on a laser system operating in the near- and mid-IR spectral regions, having output characteristics designed to be optimal for cutting various tissue types. We provide a brief overview of laser-tissue interactions and the importance of controlling certain properties of the light beam. We describe the principle of operation of the laser system, which is generally based on a wavelength-tunable alexandrite laser oscillator/amplifier and multiple Raman conversion stages. This configuration provides robust access to the mid-IR spectral region at wavelengths, pulse energies, pulse durations, and repetition rates that are attractive for neurosurgical applications. We summarize results for ultra-precise selective cutting of nerve sheaths and retinas with little collateral damage; this has applications in procedures such as optic-nerve-sheath fenestration and possible spinal repair. We also report results for cutting corneal and dermal tissues.

  8. New vision solar system mission study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex, and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  9. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    Directory of Open Access Journals (Sweden)

    Miguel Gavilán

    2012-01-01

    Full Text Available This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method on the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.

  10. Complete vision-based traffic sign recognition supported by an I2V communication system.

    Science.gov (United States)

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method on the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.
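
    A hedged sketch of the detection-plus-classification idea (Hough-transform candidates followed by an SVM) is given below; it is not the authors' implementation, and the input image "road_scene.jpg", the placeholder training data, and all parameter values are assumptions.

```python
# Hedged sketch of a circular-sign pipeline: a Hough transform proposes
# circular candidates in the grey image and an SVM classifies each cropped
# candidate. The SVM here is fitted on random placeholder data purely so the
# sketch runs end to end; it is not a trained traffic-sign classifier.
import cv2
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
svm = SVC().fit(rng.random((40, 32 * 32)), rng.integers(0, 4, 40))  # placeholder

frame = cv2.imread("road_scene.jpg")   # hypothetical input frame
if frame is None:
    raise SystemExit("no input frame found")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=160, param2=40, minRadius=8, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        crop = gray[max(y - r, 0):y + r, max(x - r, 0):x + r]
        if crop.size == 0:
            continue
        feature = cv2.resize(crop, (32, 32)).flatten().reshape(1, -1) / 255.0
        print(f"candidate at ({x}, {y}), r={r}: class {svm.predict(feature)[0]}")
```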

  11. Commercialization plan laser-based decoating systems

    International Nuclear Information System (INIS)

    Freiwald, J.; Freiwald, D.A.

    1998-01-01

    F2 Associates Inc. (F2) is a small, high-technology firm focused on developing and commercializing environmentally friendly laser ablation systems for industrial-rate removal of surface coatings from metals, concrete, and delicate substrates such as composites. F2 has a contract with the US Department of Energy Federal Energy Technology Center (FETC) to develop and test a laser-based technology for removing contaminated paint and other contaminants from concrete and metal surfaces. Task 4.1 in Phase 2 of the Statement of Work for this DOE contract requires that F2 ''document its plans for commercializing and marketing the stationary laser ablation system. This document shall include a discussion of prospects for commercial customers and partners and may require periodic update to reflect changing strategy. This document shall be submitted to the DOE for review.'' This report is being prepared and submitted in fulfillment of that requirement. This report describes the laser-based technology for cleaning and coatings removal, the types of laser-based systems that have been developed by F2 based on this technology, and the various markets that are emerging for this technology. F2's commercialization and marketing plans are described, including how F2's organization is structured to meet the needs of technology commercialization, F2's strategy and marketing approach, and the necessary steps to receive certification for removing paint from aircraft and DOE certification for D and D applications. The future use of the equipment built for the DOE contract is also discussed.

  12. A 1-kJ KrF laser system for laser fusion research

    International Nuclear Information System (INIS)

    Owadano, Y.; Okuda, I.; Tanimoto, M.; Matsumoto, Y.; Yaoita, A.; Komeiji, S.; Yano, M.

    1987-01-01

    Ultraviolet laser light has several advantages in coupling with a laser fusion target, and the KrF laser is considered to be a promising candidate for the driver because of its short wavelength, high overall efficiency, and scalability to a megajoule-class system. The Electrotechnical Laboratory is developing a 1-kJ class KrF laser system to perform target-shooting experiments in the 10^13-10^15 W/cm^2, 10-20-ns range and to investigate the possibility of a compact laser fusion driver which operates at a high pumping density and high laser power density. Based on the pulsed-power technology used in Amp2 and the measured characteristics of the Kr-rich mixture, Amp3 was designed to operate at high optical power density with a Kr-rich mixture. Amp3 has four PFLs charged by a single 40-kJ Marx generator and four e-beam diodes (550 kV, 4 Ω) arranged cylindrically around the laser cell. The active volume is 660 cm^2 (29 cm in diameter) x 1 m, and 2-atm Kr is pumped at a density of 1.9 MW/cm^3. An output energy of 1 kJ is expected at an intrinsic efficiency of 8.3% and an overall efficiency of 2.5%. The output energy fluence is 1.5 J/cm^2 (15 MW/cm^2) on average, which is lower than the damage threshold of our fully reflecting AR coatings (>3 J/cm^2).

  13. Vision and the hypothalamus.

    Science.gov (United States)

    Trachtman, Joseph N

    2010-02-01

    For nearly 2 millennia, signs of hypothalamic-related vision disorders have been noticed as illustrated by paintings and drawings of that time of undiagnosed Horner's syndrome. It was not until the 1800s, however, that specific connections between the hypothalamus and the vision system were discovered. With a fuller elaboration of the autonomic nervous system in the early to mid 1900s, many more pathways were discovered. The more recently discovered retinohypothalamic tracts show the extent and influence of light stimulation on hypothalamic function and bodily processes. The hypothalamus maintains its myriad connections via neural pathways, such as with the pituitary and pineal glands; the chemical messengers of the peptides, cytokines, and neurotransmitters; and the nitric oxide mechanism. As a result of these connections, the hypothalamus has involvement in many degenerative diseases. A complete feedback mechanism between the eye and hypothalamus is established by the retinohypothalamic tracts and the ciliary nerves innervating the anterior pole of the eye and the retina. A discussion of hypothalamic-related vision disorders includes neurologic syndromes, the lacrimal system, the retina, and ocular inflammation. Tables and figures have been used to aid in the explanation of the many connections and chemicals controlled by the hypothalamus. The understanding of the functions of the hypothalamus will allow the clinician to gain better insight into the many pathologies associated between the vision system and the hypothalamus. In the future, it may be possible that some ocular disease treatments will be via direct action on hypothalamic function. Copyright 2010 American Optometric Association. Published by Elsevier Inc. All rights reserved.

  14. Virtual expansion of the technical vision system for smart vehicles based on multi-agent cooperation model

    Science.gov (United States)

    Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay

    2017-12-01

    Road safety and driving in dense traffic flows pose some challenges in receiving information about surrounding moving objects, some of which can be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in a current road scene via a system with a multitude of cooperating smart vehicles exchanging information. It also describes the intelligent agent model and provides methods and algorithms for identifying and evaluating various characteristics of moving objects in the video flow. The authors also suggest ways of integrating the information from the technical vision system into the model, with further expansion of virtual monitoring for the system's objects. Implementation of this approach can help to expand the virtual field of view of a technical vision system.

  15. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    Acts as a single source reference providing readers with an overview of how computer vision can contribute to the different applications in the field of road transportation. This book presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each presented in a tutorial manner, describing the motivation for and benefits of the application, and a description of the state of the art.

  16. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    Science.gov (United States)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly hoped for, given the increase in the low-birth-weight birth rate. In particular, the respiration of low-birth-weight babies is uncertain because their central nervous and respiratory functions are immature. Therefore, a low-birth-weight baby often suffers from respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure the respiratory rate and SpO2 (saturation of peripheral oxygen). However, because a contact-type sensor might damage the newborn's skin, monitoring neonatal respiration is a real burden. Therefore, we developed a respiratory monitoring system for newborns using an FG (fiber grating) vision sensor. The FG vision sensor is an active stereo vision sensor that makes non-contact 3D measurement possible. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region with respiration. We conducted a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using an FG vision sensor enables a minimally invasive procedure.
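
    A minimal sketch of how such a respiratory waveform could be derived from a sequence of depth maps is shown below, assuming a synthetic depth sequence and a hypothetical region of interest; it is illustrative only, not the authors' system.

```python
# Illustrative sketch: given a sequence of depth maps of the thorax/abdomen
# from an active stereo (FG-type) sensor, the respiratory waveform is taken as
# the mean vertical displacement of a region of interest over time, and the
# breathing rate as its dominant frequency. The depth sequence is synthetic.
import numpy as np

def respiratory_waveform(depth_frames, roi):
    """depth_frames: (T, H, W) depth maps in mm; roi: (top, bottom, left, right)."""
    t, b, l, r = roi
    waveform = depth_frames[:, t:b, l:r].mean(axis=(1, 2))
    return waveform - waveform.mean()        # zero-centred chest motion in mm

def respiratory_rate(waveform, fps):
    """Estimate breaths per minute from the dominant frequency of the waveform."""
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fps)
    dominant = freqs[1:][np.argmax(spectrum[1:])]   # skip the DC component
    return dominant * 60.0

# Synthetic 30 s sequence at 10 fps with a 40-breaths/min oscillation
fps, breaths_hz = 10, 40 / 60
tgrid = np.arange(300) / fps
frames = 500 + 2.0 * np.sin(2 * np.pi * breaths_hz * tgrid)[:, None, None] * np.ones((1, 40, 40))
print(round(respiratory_rate(respiratory_waveform(frames, (5, 35, 5, 35)), fps), 1))
```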

  17. An All-Solid-State High Repetition Rate Titanium:Sapphire Laser System For Resonance Ionization Laser Ion Sources

    Science.gov (United States)

    Mattolat, C.; Rothe, S.; Schwellnus, F.; Gottwald, T.; Raeder, S.; Wendt, K.

    2009-03-01

    On-line production facilities for radioactive isotopes nowadays rely heavily on resonance ionization laser ion sources due to their demonstrated unsurpassed efficiency and elemental selectivity. Powerful high repetition rate tunable pulsed dye or Ti:sapphire lasers can be used for this purpose. To counteract the limitations of the short-pulse pump lasers needed for dye-laser pumping (i.e., copper vapor lasers), which include high maintenance and nevertheless often imperfect reliability, an all-solid-state Nd:YAG-pumped Ti:sapphire laser system has been constructed. This could complement or even replace dye laser systems, eliminating their disadvantages but, on the other hand, introducing shortcomings in the available wavelength range. The pros and cons of these developments will be discussed.

  18. A Multi-Component Automated Laser-Origami System for Cyber-Manufacturing

    Science.gov (United States)

    Ko, Woo-Hyun; Srinivasa, Arun; Kumar, P. R.

    2017-12-01

    Cyber-manufacturing systems can be enhanced by an integrated network architecture that is easily configurable, reliable, and scalable. We consider a cyber-physical system for use in an origami-type laser-based custom manufacturing machine employing folding and cutting of sheet material to manufacture 3D objects. We have developed such a system for use in a laser-based autonomous custom manufacturing machine equipped with real-time sensing and control. The basic elements in the architecture are built around the laser processing machine. They include a sensing system to estimate the state of the workpiece, a control system determining control inputs for the laser system based on the estimated data and the user's job requests, a robotic arm manipulating the workpiece in the workspace, and middleware, named Etherware, supporting communication among the systems. We demonstrate automated 3D laser cutting and bending to fabricate a 3D product as an experimental result.

  19. Micro-vision servo control of a multi-axis alignment system for optical fiber assembly

    International Nuclear Information System (INIS)

    Chen, Weihai; Yu, Fei; Qu, Jianliang; Chen, Wenjie; Zhang, Jianbin

    2017-01-01

    This paper describes a novel optical fiber assembly system featuring a multi-axis alignment function based on micro-vision feedback control. It consists of an active parallel alignment mechanism, a passive compensation mechanism, a micro-gripper and a micro-vision servo control system. The active parallel alignment part is a parallelogram-based design with remote-center-of-motion (RCM) function to achieve precise rotation without fatal lateral motion. The passive mechanism, with five degrees of freedom (5-DOF), is used to implement passive compensation for multi-axis errors. A specially designed 1-DOF micro-gripper mounted onto the active parallel alignment platform is adopted to grasp and rotate the optical fiber. A micro-vision system equipped with two charge-coupled device (CCD) cameras is introduced to observe the small field of view and obtain multi-axis errors for servo feedback control. The two CCD cameras are installed in an orthogonal arrangement—thus the errors can be easily measured via the captured images. Meanwhile, a series of tracking and measurement algorithms based on specific features of the target objects are developed. Details of the force and displacement sensor information acquisition in the assembly experiment are also provided. An experiment demonstrates the validity of the proposed visual algorithm by achieving the task of eliminating errors and inserting an optical fiber to the U-groove accurately. (paper)

  20. A Longitudinal Study on the Effects of Laser Refractive Eye Surgery in Military Aircrew

    National Research Council Canada - National Science Library

    Hinton, Patricia; Niall, Keith K; Wainberg, Dan; Bateman, Bill; Courchesne, Cyd; Gray, Gary; Quick, Gayle; Thatcher, Bob

    2005-01-01

    .... Postoperative low contrast acuity has improved with newer laser techniques, but there was still concern that vision after laser eye surgery would not be good enough for military aircrew demands...

  1. A future vision of nuclear material information systems

    International Nuclear Information System (INIS)

    Suski, N.; Wimple, C.

    1999-01-01

    To address the current and future needs for nuclear materials management and safeguards information, Lawrence Livermore National Laboratory envisions an integrated nuclear information system that will support several functions. The vision is to link distributed information systems via a common communications infrastructure designed to address the information interdependencies between two major elements: Domestic, with information about specific nuclear materials and their properties, and International, with information pertaining to foreign nuclear materials, facility design and operations. The communication infrastructure will enable data consistency, validation and reconciliation, as well as provide a common access point and user interface for a broad range of nuclear materials information. Information may be transmitted to, from, and within the system by a variety of linkage mechanisms, including the Internet. Strict access control will be employed, as well as data encryption and user authentication, to provide the necessary information assurance. The system can provide a mechanism not only for data storage and retrieval but will eventually also provide the analytical tools necessary to support the U.S. government's nuclear materials management needs and non-proliferation policy goals.

  2. Diagnosis System for Diabetic Retinopathy and Glaucoma Screening to Prevent Vision Loss

    Directory of Open Access Journals (Sweden)

    Siva Sundhara Raja DHANUSHKODI

    2014-03-01

    Full Text Available Aim: Diabetic retinopathy (DR) and glaucoma are two of the most common retinal disorders and are major causes of blindness in diabetic patients. DR appears in retinal images as damage to the retinal blood vessels, which leads to the formation of hemorrhages spread over the entire region of the retina. Glaucoma is caused by hypertension in diabetic patients. Both DR and glaucoma lead to vision loss in diabetic patients. Hence, a computer-aided diagnosis system for diabetic retinopathy and glaucoma screening is proposed in this paper to help prevent vision loss. Method: The diagnosis system for DR consists of two stages, namely detection and segmentation of the fovea and hemorrhages. The diagnosis system for glaucoma screening consists of three stages, namely blood vessel segmentation, extraction of the optic disc (OD) and optic cup (OC) regions, and determination of the rim area between the OD and OC. Results: The specificity and accuracy of hemorrhage detection were found to be 98.47% and 98.09%, respectively. The accuracy of OD detection was found to be 99.3%. This outperforms state-of-the-art methods. Conclusion: In this paper, a diagnosis system is developed to classify DR and glaucoma screening into mild, moderate, and severe grades, respectively.

  3. Traveling wave laser system

    International Nuclear Information System (INIS)

    Gregg, D.W.; Kidder, R.E.; Biehl, A.T.

    1975-01-01

    The invention broadly involves a method and means for generating a traveling wave laser pulse and is basically analogous to a single pass light amplifier system. However, the invention provides a traveling wave laser pulse of almost unlimited energy content, wherein a gain medium is pumped in a traveling wave mode, the traveling wave moving at essentially the velocity of light to generate an amplifying region or zone which moves through the medium at the velocity of light in the presence of directed stimulating radiation, thereby generating a traveling coherent, directed radiation pulse moving with the amplification zone through the gain medium. (U.S.)

  4. Alignment system for SGII-Up laser facility

    Science.gov (United States)

    Gao, Yanqi; Cui, Yong; Li, Hong; Gong, Lei; Lin, Qiang; Liu, Daizhong; Zhu, Baoqiang; Ma, Weixin; Zhu, Jian; Lin, Zunqi

    2018-03-01

    The SGII-Up laser facility in Shanghai is one of the most important high-power laser facilities in China. It is designed to obtain 24 kJ (3ω) of energy with a square pulse of 3 ns using eight laser beams (two bundles). To satisfy the requirements for safety, efficiency, and quality, an alignment system is developed for this facility. This alignment system can perform automatic alignment of the preamplifier system, main amplifier system, and harmonic conversion system within 30 min before every shot during the routine operation of the facility. In this article, an overview of the alignment system is first presented. Then, its alignment characteristics are discussed, along with the alignment process. Finally, experimental results, including the alignment results and the facility performance, are reported. The results show that the far-field beam pointing alignment accuracy is better than 3 μrad, and the alignment error of the near-field beam centering is no larger than 1 mm. These satisfy the design requirements very well.

  5. Design optimization of single-main-amplifier KrF laser-fusion systems

    International Nuclear Information System (INIS)

    Harris, D.B.; Pendergrass, J.H.

    1985-01-01

    KrF lasers appear to be a very promising laser fusion driver for commercial applications. The Large Amplifier Module for the Aurora Laser System at Los Alamos is the largest KrF laser in the world and is currently operating at 5 kJ with 10 to 15 kJ eventually expected. The next generation system is anticipated to be a single-main-amplifier system that generates approximately 100 kJ. This paper examines the cost and efficiency tradeoffs for a complete single-main-amplifier KrF laser fusion experimental facility. It has been found that a 7% efficient $310/joule complete laser-fusion system is possible by using large amplifier modules and high optical fluences

  6. A Spectroscopic Comparison of Femtosecond Laser Modified Fused Silica using kHz and MHz Laser Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Reichman, W J; Krol, D M; Shah, L; Yoshino, F; Arai, A; Eaton, S M; Herman, P R

    2005-09-29

    Waveguides were written in fused silica using both a femtosecond fiber laser with a 1 MHz pulse repetition rate and a femtosecond amplified Ti:sapphire laser with a 1 kHz repetition rate. Confocal Raman and fluorescence microscopy were used to study structural changes in the waveguides written with both systems. A broad fluorescence band, centered at 650 nm and associated with non-bridging oxygen hole center (NBOHC) defects, was observed after waveguide fabrication with the MHz laser. With the kHz laser system these defects were only observed for pulse energies above 1 µJ. Far fewer NBOHC defects were formed with the MHz laser than with kHz writing, possibly due to thermal annealing driven by heat accumulation effects at 1 MHz. When the kHz laser was used with pulse energies below 1 µJ, the predominant fluorescence was centered at 550 nm, a band assigned to the presence of silicon clusters (E′_δ centers). We also observed an increase in the intensity of the 605 cm^-1 Raman peak relative to the total Raman intensity, corresponding to an increase in the concentration of 3-membered rings in the lines fabricated with both laser systems.

  7. Railgun system using a laser-induced plasma armature

    Energy Technology Data Exchange (ETDEWEB)

    Onozuka, M.; Oda, Y.; Azuma, K. [Mitsubishi Heavy Industries, Ltd., 3-3-1, Minatomirai, Nishi-ku, Yokohama 220-84 (Japan)]

    1996-05-01

    Development of an electromagnetic railgun system that utilizes laser-induced plasma armature formation has been conducted to investigate the application of the railgun system to high-speed pellet injection into fusion plasmas. Using the laser-induced plasma formation technique, the required breakdown voltage was reduced to one-tenth of that for the spark-discharged plasma. The railgun system successfully accelerated the laser-induced plasma armature by an electromagnetic force, which in turn accelerated the pellet. The highest velocity of the solid hydrogen pellets obtained so far was 2.6 km/sec using a 2-m-long railgun. © 1996 American Institute of Physics.

  8. A laser calibration system for the STAR TPC

    CERN Document Server

    Lebedev, A

    2002-01-01

    A Time Projection Chamber (TPC) is the primary tracking detector for the STAR experiment at RHIC. A laser calibration system was built to calibrate and monitor the TPC tracking performance. The laser system uses a novel design which produces approximately 500 thin, ionizing beams distributed throughout the tracking volume. This new approach is significantly simpler than the traditional ones, and provides complete TPC coverage at a reduced cost. The laser system was used during the RHIC 2000 summer run to measure drift velocities with about 0.02% accuracy and to monitor the TPC performance. Calibration runs were made with and without a magnetic field to check B-field map corrections.
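
    As a toy illustration of the drift-velocity calibration idea (not the STAR procedure), laser tracks at known drift coordinates and their measured arrival times give the velocity by a straight-line fit; all numbers below are invented placeholders.

```python
# Toy sketch of drift-velocity calibration from laser tracks at known
# positions: the drift coordinate of each laser plane, divided by its measured
# arrival time, gives the velocity. Values are illustrative only.
known_z_cm = [20.0, 60.0, 100.0, 140.0, 180.0]        # assumed laser-plane positions
measured_t_us = [3.65, 10.95, 18.25, 25.55, 32.85]    # assumed measured drift times

# Least-squares slope through the origin: v = sum(z*t) / sum(t*t)
num = sum(z * t for z, t in zip(known_z_cm, measured_t_us))
den = sum(t * t for t in measured_t_us)
v_drift = num / den                                   # cm per microsecond
print(f"drift velocity ~ {v_drift:.4f} cm/us")
```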

  9. Thin-Film Polarizers for the OMEGA EP Laser System

    International Nuclear Information System (INIS)

    Oliver, J.B.; Rigatti, A.L.; Howe, J.D.; Keck, J.; Szczepanski, J.; Schmid, A.W.; Papernov, S.; Kozlov, A.; Kosc, T.Z.

    2006-01-01

    Thin-film polarizers are essential components of large laser systems such as OMEGA EP and the NIF because of the need to switch the beam out of the primary laser cavity (in conjunction with a plasma-electrode Pockels cell), as well as to provide a well-defined linear polarization for frequency conversion and to protect the system from back-reflected light. The design and fabrication of polarizers for pulse-compressed laser systems are especially challenging because of the spectral bandwidth necessary for chirped-pulse amplification.

  10. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly...... of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7-degrees-of-freedom humanoid robot arm. Successful Ping-Pong playing between the robot arm and a human is achieved with a high success rate of 88%....

  11. Control system for high power laser drilling workover and completion unit

    Science.gov (United States)

    Zediker, Mark S; Makki, Siamak; Faircloth, Brian O; DeWitt, Ronald A; Allen, Erik C; Underwood, Lance D

    2015-05-12

    A control and monitoring system controls and monitors a high power laser system for performing high power laser operations. The control and monitoring system is configured to perform high power laser operations on, and in, remote and difficult-to-access locations.

  12. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  13. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Full Text Available Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
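
    PSP intensity ratios are commonly related to pressure through a Stern-Volmer-type calibration; the sketch below fits such a relation to illustrative static-chamber data and is an assumption-laden example rather than anything taken from the paper above.

```python
# Hedged sketch of a PSP calibration of the common Stern-Volmer form
#   I_ref / I = A + B * (P / P_ref),
# fitted from intensity ratios recorded in a static calibration chamber.
# Both the relation and the sample data are illustrative placeholders.
import numpy as np

p_over_pref = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])       # applied pressure ratios
iref_over_i = np.array([0.45, 0.62, 0.79, 0.97, 1.14, 1.32])  # measured intensity ratios

B, A = np.polyfit(p_over_pref, iref_over_i, 1)   # linear fit: slope B, offset A

def pressure_from_ratio(ratio, p_ref=101325.0):
    """Invert the calibration to recover pressure (Pa) from an intensity ratio."""
    return p_ref * (ratio - A) / B

print(f"A = {A:.3f}, B = {B:.3f}")
print(f"ratio 1.0 -> {pressure_from_ratio(1.0):.0f} Pa")
```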

  14. Blue laser phase change recording system

    International Nuclear Information System (INIS)

    Hofmann, Holger; Dambach, S.Soeren; Richter, Hartmut

    2002-01-01

    The migration paths from DVD phase change recording with red laser to the next generation optical disk formats with blue laser and high NA optics are discussed with respect to optical aberration margins and disc capacities. A test system for the evaluation of phase change disks with more than 20 GB capacity is presented and first results of the recording performance are shown

  15. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  16. Synthetic vision and memory for autonomous virtual humans

    OpenAIRE

    PETERS, CHRISTOPHER; O'SULLIVAN, CAROL ANN

    2002-01-01

    A memory model based on 'stage theory', an influential concept of memory from the field of cognitive psychology, is presented for application to autonomous virtual humans. The virtual human senses external stimuli through a synthetic vision system. The vision system incorporates multiple modes of vision in order to accommodate a perceptual attention approach. The memory model is used to store perceived and attended object information at different stages in a filtering...

  17. Optimized laser system for decontamination of painted surfaces

    International Nuclear Information System (INIS)

    Champonnois, F.; Lascoutouna, C.; Long, H.; Thro, P.Y.; Mauchien, P.

    2010-01-01

    Laser systems have long been seen as potentially very interesting for removing contamination from surfaces. The main expected advantages are the possibility of a remote process and the absence of secondary waste. However, these systems were unable to find their way to industrial deployment due to the lack of reliability of the lasers and the difficulty of satisfactorily collecting the (contaminated) ablated matter. In this contribution we report on a compact, reliable and efficient laser decontamination system called ASPILASERO. It is adapted to the constraints of a nuclear environment. It takes advantage of the recent progress made on fibre lasers, which now have a lifetime longer than 20,000 hours without maintenance. The collecting system collects all the removed matter (gases and aerosols) on nuclear-grade filters. The fully automated system has been successfully tested on a vertical wall of a shut-down nuclear installation. It has demonstrated an efficiency of 1 m^2/hr, which is of the same order as other classical techniques but with a much lower quantity of waste and the ability to work continuously without human intervention. Measurements performed after the laser treatment have shown that the contamination was completely removed by removing the paint and that this contamination was not re-deposited elsewhere on the wall. The system will also be used in highly contaminated hot cells to decrease the radiation level and allow maintenance or refurbishing in safe working conditions. (authors)

  18. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  19. Laser display system for multi-depth screen projection scenarios.

    Science.gov (United States)

    La Torre, J Pablo; Mayes, Nathan; Riza, Nabeel A

    2017-11-10

    Proposed is a laser projection display system that uses an electronically controlled variable focus lens (ECVFL) to achieve sharp and in-focus image projection over multi-distance three-dimensional (3D) conformal screens. The system also functions as an embedded distance sensor that enables 3D mapping of the multi-level screen platform before the desired laser-scanned beam focused/defocused projected spot sizes are matched to the different localized screen distances on the 3D screen. Compared to conventional laser scanning and spatial light modulator (SLM) based projection systems, the proposed design offers in-focus, non-distorted projection over a multi-distance screen zone with varying depths. An experimental projection system for a screen depth variation of 65 cm is demonstrated using a 633 nm laser beam, 3 kHz scan speed galvo-scanning mirrors, and a liquid-based ECVFL. As a basic demonstration, an in-house developed MATLAB-based graphical user interface is deployed to work along with the laser projection display, enabling user inputs like text strings or predefined image projection. The user can specify projection screen distance, scanned laser linewidth, projected text font size, projected image dimensions, and laser scanning rate. Projected images are shown highlighting the 3D control capabilities of the display, including the production of a non-distorted image onto two depths versus a distorted image via dominant prior-art projection methods.
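
    A minimal sketch of the focus-to-distance idea follows, assuming a nearly collimated beam so that the required ECVFL focal length roughly equals the measured screen distance; the offset parameter, function name, and distance values are hypothetical.

```python
# Illustrative sketch (assumptions, not the paper's implementation): given the
# screen distance measured at each scan direction by the embedded distance
# sensor, the ECVFL optical power is set so the focused spot lands on the
# screen. For a nearly collimated input beam the thin-lens relation reduces to
# focal length approximately equal to the lens-to-screen distance.
def ecvfl_power_for_distance(distance_m, offset_m=0.05):
    """Return the required ECVFL power in dioptres for a screen at distance_m.

    offset_m is a hypothetical fixed path length between the ECVFL and the
    galvo mirrors, subtracted from the screen distance."""
    focal_length = max(distance_m - offset_m, 1e-3)
    return 1.0 / focal_length

# Hypothetical per-scan-point distance map (metres) from the 3D screen profile
scan_distances = [0.80, 0.95, 1.10, 1.45]
for d in scan_distances:
    print(f"screen at {d:.2f} m -> ECVFL power {ecvfl_power_for_distance(d):.2f} D")
```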

  20. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    Science.gov (United States)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  1. Energy storage and power conditioning system for the Shiva laser

    International Nuclear Information System (INIS)

    Allen, G.R.; Gagnon, W.L.; Rupert, P.R.; Trenholme, J.B.

    1975-01-01

    An optimal energy delivery system for the world's largest glass laser system has been designed based on computer modeling and operation of laser hardware. Components of the system have been tested on operating lasers at LLL. The Shiva system is now under construction and will be completed in 1977. The energy supply described here will provide cost-effective, reliable power and facilitate the gathering of data in pursuit of controlled thermonuclear reactions

  2. Evaluating the image quality of Closed Circuit Television magnification systems versus a head-mounted display for people with low vision.

    Science.gov (United States)

    Lin, Chern Sheng; Jan, Hvey-An; Lay, Yun-Long; Huang, Chih-Chia; Chen, Hsien-Tse

    2014-01-01

    In this research, image analysis was used to optimize the visual output of a traditional closed-circuit television (CCTV) magnifying system and a head-mounted display (HMD) for people with low vision. There were two purposes: (1) to determine the benefit of using an image analysis system to customize image quality for a person with low vision, and (2) to have people with low vision evaluate a traditional CCTV magnifier and an HMD, each customized to the user's needs and preferences. A CCTV system can electronically alter images by increasing the contrast, brightness, and magnification for the visually disabled when they are reading texts and pictures. Test methods were developed to evaluate and customize a magnification system for persons with low vision. The head-mounted display with CCTV was used to obtain a better depth of field and a higher modulation transfer function from the video camera. By sensing the parameters of the environment (e.g., ambient light level) and collecting the user's specific characteristics, the system could make adjustments according to the user's needs, thus allowing the visually disabled to read more efficiently.
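
    The kind of per-user image adjustment described above might look like the following sketch, where magnification, contrast, and brightness are applied to a camera frame with a simple ambient-light heuristic; the parameter values, scaling rule, and file names are assumptions for illustration.

```python
# Minimal sketch of per-user image adjustment for a CCTV-style magnifier:
# magnification, brightness and contrast are applied to the camera frame, with
# the contrast gain scaled down under bright ambient light. The heuristic and
# defaults are illustrative assumptions, not the study's settings.
import cv2

def enhance_for_low_vision(frame, magnification=2.0, contrast=1.8,
                           brightness=30, ambient_lux=200.0):
    # Reduce contrast gain slightly in bright rooms (illustrative heuristic)
    gain = contrast / (1.0 + ambient_lux / 1000.0)
    h, w = frame.shape[:2]
    zoomed = cv2.resize(frame, (int(w * magnification), int(h * magnification)),
                        interpolation=cv2.INTER_CUBIC)
    return cv2.convertScaleAbs(zoomed, alpha=gain, beta=brightness)

# Example usage with a hypothetical captured frame
frame = cv2.imread("cctv_frame.png")
if frame is not None:
    cv2.imwrite("enhanced.png", enhance_for_low_vision(frame))
```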

  3. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  4. Magnetically switched power supply system for lasers

    Science.gov (United States)

    Pacala, Thomas J. (Inventor)

    1987-01-01

    A laser power supply system is described in which separate pulses are utilized to avalanche ionize the gas within the laser and then produce a sustained discharge to cause the gas to emit light energy. A pulsed voltage source is used to charge a storage device such as a distributed capacitance. A transmission line or other suitable electrical conductor connects the storage device to the laser. A saturable inductor switch is coupled in the transmission line for containing the energy within the storage device until the voltage level across the storage device reaches a predetermined level, which level is less than that required to avalanche ionize the gas. An avalanche ionization pulse generating circuit is coupled to the laser for generating a high voltage pulse of sufficient amplitude to avalanche ionize the laser gas. Once the laser gas is avalanche ionized, the energy within the storage device is discharged through the saturable inductor switch into the laser to provide the sustained discharge. The avalanche ionization generating circuit may include a separate voltage source which is connected across the laser or may be in the form of a voltage multiplier circuit connected between the storage device and the laser.

  5. Optics, illumination, and image sensing for machine vision II

    International Nuclear Information System (INIS)

    Svetkoff, D.J.

    1987-01-01

    These proceedings collect papers on the general subject of machine vision. Topics include illumination and viewing systems, x-ray imaging, automatic SMT inspection with x-ray vision, and 3-D sensing for machine vision

  6. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    Report AFRL-SA-WP-SR-2017-0012, Operational Based Vision Assessment Automated Vision Test Collection User Guide, Elizabeth Shoda, Alex... (June 2015 – May 2017). The guide documents the automated vision tests, or AVT. Development of the AVT was required to support the threshold-level vision testing capability needed to investigate the...

  7. THE SYSTEM OF TECHNICAL VISION IN THE ARCHITECTURE OF THE REMOTE CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    S. V. Shavetov

    2014-03-01

    Full Text Available The paper deals with the development of a video broadcasting system for controlling mobile robots over the Internet. A brief overview of the issues encountered in real-time video stream broadcasting, and their solutions, is given. Affordable and versatile technical vision solutions are considered. An approach for frame-accurate video rebroadcasting to an unlimited number of end-users is proposed. The optimal performance parameters of the network equipment for a finite number of cameras are defined. The system was tested on five IP cameras from different manufacturers. The average time delay for broadcasting in MJPEG format was 200 ms over the local network and 500 ms over the Internet.

  8. Laser-ranging scanning system to observe topographical deformations of volcanoes.

    Science.gov (United States)

    Aoki, T; Takabe, M; Mizutani, K; Itabe, T

    1997-02-20

    We have developed a laser-ranging system to observe the topographical structure of volcanoes. This system measures the distance to a target with a laser and reveals the three-dimensional topographical structure of a volcano with an accuracy of 30 cm. This accuracy is lower than that of a typical laser-ranging system that uses a corner-cube reflector as a target, because the reflected light jitters as a result of the inclination and unevenness of the target ground surface. However, this laser-ranging system is useful for detecting deformations of topographical features where placement of a reflector is difficult, such as in volcanic regions.
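
    For context, the elementary pulsed time-of-flight relation behind such ranging is d = c t / 2; the snippet below simply evaluates it and shows that a 30 cm accuracy corresponds to resolving roughly 2 ns of round-trip timing. It is a generic illustration, not the authors' processing chain.

```python
# Elementary pulsed time-of-flight relation: the round-trip time of a laser
# pulse gives the one-way distance d = c * t / 2.
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds):
    return C * t_seconds / 2.0

# A 30 cm ranging accuracy corresponds to about 2 ns of round-trip time
print(range_from_round_trip(20e-6))   # 20 microseconds round trip -> ~3 km
print(range_from_round_trip(2e-9))    # 2 ns -> ~0.3 m
```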

  9. High speed laser tomography system

    Science.gov (United States)

    Samsonov, D.; Elsaesser, A.; Edwards, A.; Thomas, H. M.; Morfill, G. E.

    2008-03-01

A high speed laser tomography system was developed, capable of acquiring three-dimensional (3D) images of optically thin clouds of moving micron-sized particles. It operates by parallel-shifting an illuminating laser sheet with a pair of galvanometer-driven mirrors and synchronously recording two-dimensional (2D) images of thin slices of the imaged volume. The maximum scanning speed achieved was 120,000 slices/s; sequences of 24 volume scans (up to 256 slices each) have been obtained. The 2D slices were stacked to form 3D images of the volume, then the positions of the particles were identified and followed in the consecutive scans. The system was used to image a complex plasma with particles moving at speeds up to cm/s.

  10. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  11. Blue laser diode (450 nm) systems for welding copper

    Science.gov (United States)

    Silva Sa, M.; Finuf, M.; Fritz, R.; Tucker, J.; Pelaprat, J.-M.; Zediker, M. S.

    2018-02-01

This paper will discuss the development of high power blue laser systems for industrial applications. The key development enabling high power blue laser systems is the emergence of high power, high brightness laser diodes at 450 nm. These devices have a high individual brightness rivaling their IR counterparts, and they have the potential to exceed their performance and price barriers. They also have a very high characteristic temperature T0, resulting in a wavelength shift of only 0.04 nm/°C. They have a very stable lateral far-field profile, which allows them to be combined with other diodes to achieve superior brightness. This paper will report on the characteristics of the blue laser diodes and their integration into a modular laser system suitable for scaling the output power to the 1 kW level and beyond. Test results will be presented for welding of copper at power levels ranging from 150 W to 600 W.

  12. High-speed potato grading and quality inspection based on a color vision system

    Science.gov (United States)

    Noordam, Jacco C.; Otten, Gerwoud W.; Timmermans, Toine J. M.; van Zwol, Bauke H.

    2000-03-01

    A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape and external defects such as greening, mechanical damages, rhizoctonia, silver scab, common scab, cracks and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC Digital Signal Processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses Linear Discriminant Analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier based shape classification technique. Features such as area, eccentricity and central moments are used to discriminate between similar colored defects. Experiments with red and yellow skin-colored potatoes have shown that the system is robust and consistent in its classification.
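    To illustrate the Mahalanobis-distance pixel classification step described above, here is a minimal Python sketch. The class names, training statistics, and RGB values are hypothetical placeholders, not the classifier trained in the paper; in practice the statistics would be estimated from labelled training pixels as part of the LDA procedure.

```python
import numpy as np

def fit_class_stats(samples):
    """samples: (N, 3) array of RGB training pixels for one class."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    return mean, np.linalg.inv(cov)

def mahalanobis_sq(pixels, mean, inv_cov):
    """Squared Mahalanobis distance of each pixel to one class."""
    d = pixels - mean
    return np.einsum('ij,jk,ik->i', d, inv_cov, d)

def classify(pixels, class_stats):
    """Assign each pixel to the class with the smallest distance."""
    dists = np.stack([mahalanobis_sq(pixels, m, ic)
                      for m, ic in class_stats.values()], axis=1)
    labels = list(class_stats.keys())
    return [labels[i] for i in np.argmin(dists, axis=1)]

# Example with synthetic training data for two hypothetical classes.
rng = np.random.default_rng(0)
stats = {
    "healthy_skin": fit_class_stats(rng.normal([180, 150, 90], 10, (500, 3))),
    "greening":     fit_class_stats(rng.normal([120, 160, 70], 10, (500, 3))),
}
print(classify(np.array([[178.0, 152.0, 88.0]]), stats))
```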

  13. Real-time machine vision system using FPGA and soft-core processor

    Science.gov (United States)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at a rate of 75 frames per second. The image component labeling and feature extraction modules ran in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The proposed FPGA-based machine vision system has a high frame rate, low latency and a power consumption much lower than that of commercially available smart camera solutions.
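    As a rough illustration of how distance and angle can be recovered from extracted reference-point features, here is a sketch under a simple pinhole-camera assumption with two markers of known physical separation. The focal length, image centre, marker separation, and function names are all assumptions for illustration, not the algorithm implemented on the MicroBlaze.

```python
import math

# Hypothetical camera and scene parameters.
FOCAL_PX = 800.0            # focal length in pixels (assumed)
CX = 320.0                  # horizontal image centre (assumed)
MARKER_SEPARATION_M = 0.50  # known physical distance between the two markers

def distance_and_angle(u1, u2):
    """Estimate camera distance to the markers and the bearing of their
    midpoint, from the horizontal pixel coordinates of the two markers.

    Pinhole model: pixel separation ≈ FOCAL_PX * real separation / distance.
    """
    pixel_sep = abs(u2 - u1)
    distance = FOCAL_PX * MARKER_SEPARATION_M / pixel_sep
    mid_u = 0.5 * (u1 + u2)
    angle = math.atan2(mid_u - CX, FOCAL_PX)  # bearing relative to optical axis
    return distance, math.degrees(angle)

d, a = distance_and_angle(290.0, 410.0)
print(f"distance ≈ {d:.2f} m, angle ≈ {a:.1f}°")
```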

  14. Development of the power control system for semiconductor lasers

    International Nuclear Information System (INIS)

    Kim, Kwang Suk; Kim, Cheol Jung

    1997-12-01

In the first year of this program, we developed the power control system for semiconductor lasers. We applied high-current switching-mode techniques to fabricating the power control system. We then investigated direct side-pumping techniques using GaAlAs diode laser bars to pump the laser crystal without pumping optics. We obtained 0.5 W average output power from this DPSSL. (author). 54 refs., 3 tabs., 18 figs

  15. Mechanical design for a large fusion laser system

    International Nuclear Information System (INIS)

    Hurley, C.A.

    1979-01-01

    The Nova Mechanical Systems Group at LLL is responsible for the design, fabrication, and installation of all laser chain components, for the stable support structure that holds them, and for the beam lines that transport the laser beam to the target system. This paper is an overview of the group's engineering effort, emphasizing new developments

  16. Solid-state disk amplifiers for fusion-laser systems

    Energy Technology Data Exchange (ETDEWEB)

    Martin, W.E.; Trenholme, J.B.; Linford, G.J.; Yarema, S.M.; Hurley, C.A.

    1981-09-01

    We review the design, performance, and operation of large-aperture (10 to 46 cm) solid-state disk amplifiers for use in laser systems. We present design data, prototype tests, simulations, and projections for conventional cylindrical pump-geometry amplifiers and rectangular pump-geometry disk amplifiers. The design of amplifiers for the Nova laser system is discussed.

  17. A noncontact laser system for measuring soil surface topography

    International Nuclear Information System (INIS)

    Huang, C.; White, I.; Thwaite, E.G.; Bendeli, A.

    1988-01-01

Soil surface topography profoundly influences runoff hydrodynamics, soil erosion, and surface retention of water. Here we describe an optical noncontact system for measuring soil surface topography. Soil elevation is measured by projecting a laser beam onto the surface and detecting the position of the interception point. The optical axis of the detection system is oriented at a small angle to the incident beam. A low-power HeNe (Helium-Neon) laser is used as the laser source, a photodiode array is used as the laser image detector, and an ordinary 35-mm single-lens reflex camera provides the optical system to focus the laser image onto the diode array. A wide spectrum of measurement ranges (R) and resolutions are selectable, from 1 mm to 1 m. These are determined by the laser-camera distance and angle, the focal length of the lens, the sensing length of the diode array and the number of elements (N) contained in the array. The resolution of the system is approximately R/2N. We show for the system used here that this resolution is approximately 0.2%. In the configuration selected, elevation changes of 0.16 mm could be detected over a surface elevation range of 87 mm. The sampling rate of the system is 1000 Hz, which permits soil surfaces to be measured at speeds of up to 1 m s⁻¹ with measurements taken at 1-mm spacing. Measurements of individual raindrop impacts on the soil and of soil surfaces before and after rain show the versatility of the laser surface profiler, which has applications in studies of erosion processes, surface storage and soil trafficability
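    The quoted resolution relation R/2N can be checked with a few lines of Python. This is only an illustrative calculation; the number of array elements used below (256) is an assumption, since the abstract gives the range (87 mm) and the achieved resolution (0.16 mm) but not N, and the pixel-to-elevation mapping is an assumed linear calibration, not the authors' actual one.

```python
# Illustrative check of the triangulation profiler resolution ~ R / (2N).
R_MM = 87.0        # elevation range reported in the abstract
N_ELEMENTS = 256   # assumed number of diode-array elements

resolution_mm = R_MM / (2 * N_ELEMENTS)
print(f"approximate resolution: {resolution_mm:.2f} mm")      # ~0.17 mm
print(f"relative resolution:    {resolution_mm / R_MM:.2%}")  # ~0.2 %

def elevation_from_pixel(pixel_index, n=N_ELEMENTS, r=R_MM):
    """Map a detected laser-spot position on the array to elevation,
    assuming a linear calibration over the full measurement range."""
    return (pixel_index / (n - 1)) * r

print(f"spot at element 128 -> elevation {elevation_from_pixel(128):.1f} mm")
```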

  18. Development of an integrated automated retinal surgical laser system.

    Science.gov (United States)

    Barrett, S F; Wright, C H; Oberg, E D; Rockwell, B A; Cain, C; Rylander, H G; Welch, A J

    1996-01-01

Researchers at the University of Texas and the USAF Academy have worked toward the development of a retinal robotic laser system. The overall goal of this ongoing project is to precisely place and control the depth of laser lesions for the treatment of various retinal diseases such as diabetic retinopathy and retinal tears. Separate low-speed prototype subsystems have been developed to control lesion depth using lesion reflectance feedback parameters and to control lesion placement using retinal vessels as tracking landmarks. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Preliminary testing on rhesus primate subjects has been accomplished with the CW argon laser and also the ultrashort pulse laser. Recent efforts have concentrated on combining the two subsystems into a single prototype capable of simultaneously controlling both lesion depth and placement. We have designated this combined system CALOSOS, for Computer Aided Laser Optics System for Ophthalmic Surgery. Several interesting areas of study have developed in integrating the two subsystems: 1) "doughnut" shaped lesions that occur under certain combinations of laser power, spot size, and irradiation time, complicating measurements of central lesion reflectance; 2) the optimal retinal field of view (FOV) to achieve both tracking and lesion parameter control; and 3) development of a hybrid analog/digital tracker using confocal reflectometry to achieve retinal tracking speeds of up to 100 deg/s. This presentation will discuss the design issues of this clinically significant prototype system. Details of the hybrid prototype system are provided in "Hybrid Eye Tracking for Computer-Aided Retinal Surgery" at this conference. The paper will close with the remaining technical hurdles to clear prior to testing the full-up clinical prototype system.

  19. ROV-based Underwater Vision System for Intelligent Fish Ethology Research

    Directory of Open Access Journals (Sweden)

    Rui Nian

    2013-09-01

Fish ethology is a prospective discipline for ocean surveys. In this paper, an ROV-based system is established to perform underwater visual tasks with customized optical sensors installed. An image quality enhancement method is first presented, building underwater imaging models that combine homomorphic filtering and wavelet decomposition. The underwater vision system can further detect and track swimming fish in the resulting images using strategies based on curve evolution and particle filtering, in order to obtain a deeper understanding of fish behaviours. The simulation results show the excellent performance of the developed scheme with regard to both robustness and effectiveness.
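    A minimal numpy sketch of the homomorphic filtering part of such an enhancement pipeline is shown below (log transform, high-frequency emphasis in the Fourier domain, exponentiation). The filter shape and parameter values are assumptions for illustration, and the wavelet-decomposition stage described in the paper is omitted.

```python
import numpy as np

def homomorphic_filter(img, gamma_low=0.5, gamma_high=1.8, cutoff=30.0):
    """Simplified homomorphic filtering of a grayscale image (float array in
    [0, 1]). Suppresses slowly varying illumination and boosts reflectance
    detail; parameter values are illustrative only."""
    rows, cols = img.shape
    log_img = np.log1p(img)

    # Gaussian high-frequency emphasis filter, centred after fftshift.
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    dist2 = u[:, None] ** 2 + v[None, :] ** 2
    h = (gamma_high - gamma_low) * (1 - np.exp(-dist2 / (2 * cutoff ** 2))) + gamma_low

    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
    return np.clip(np.expm1(filtered), 0.0, 1.0)

# Example on a synthetic low-contrast "underwater" frame.
frame = 0.3 + 0.1 * np.random.default_rng(1).random((240, 320))
enhanced = homomorphic_filter(frame)
```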

  20. Synchronised laser chaos communication: statistical investigation of an experimental system

    OpenAIRE

    Lawrance, Anthony J.; Papamarkou, Theodore; Uchida, Atsushi

    2017-01-01

    The paper is concerned with analyzing data from an experimental antipodal laser-based chaos shift-keying communication system. Binary messages are embedded in a chaotically behaving laser wave which is transmitted through a fiber-optic cable and are decoded at the receiver using a second laser synchronized with the emitter laser. Instrumentation in the experimental system makes it particularly interesting to be able to empirically analyze both optical noise and synchronization error as well a...

  1. Coherent Laser Radar Metrology System for Large Scale Optical Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — A new type of laser radar metrology inspection system is proposed that incorporates a novel, dual laser coherent detection scheme capable of eliminating both...

  2. Laser cutting system in bridge fabricating line; Kyoryo seisaku line ni okeru laser no setsudan system

    Energy Technology Data Exchange (ETDEWEB)

    Kitaguchi, Y.; Yokotani, K. [Hitachi Zosen Corp., Osaka (Japan)

    1994-11-01

This paper describes the laser cutting system installed at a new advanced plant constructed by Hitachi Shipbuilding and Engineering Co., Ltd. in 1993. At the plant, the cutting line consists of four NC cutting lines: a plasma cutting machine, a gas cutting machine, a frame planer, and a laser cutting machine. The laser cutting machine is used to cut complex shapes from relatively thin (6 - 16 mm) materials with high accuracy. The machine consists of a gantry-type NC cutter carrying a 3 kW CO2 laser oscillator and a slat conveyor about 30 m long, with a maximum cutting width of 3.6 m. The NC cutting machine is provided with an automatic printing function using NC data, a marking function, a scheduled operation function, a steel plate detector, a coordinate rotation function, etc. These functions enable unattended operation of the machine to cut multiple materials. Like the other production lines, this NC laser cutting line collects performance data during operation, so it can be placed under real-time centralized control together with the other lines. All these technologies have provided high accuracy and efficiency in production as well as an environment in which many female operators can successfully work. 10 figs., 4 tabs.

  3. Recent laser physics results on power balance and frequency conversion with the Phebus laser system

    International Nuclear Information System (INIS)

    Thiell, G.; Paye, J.; Graillot, H.; Mathieu, F.; Boscheron, A.; Reynier, F.; Estraillier, P.; Bruneau, J.L.

    1995-01-01

The Phebus laser system has been devoted mainly to plasma physics experiments, such as implosion and hydrodynamic instability studies, since it was completed in 1985. During the last two years, however, the three Phebus beamlines (two main beams and a backlighter beam) have also been used to perform laser physics studies in view of the Megajoule laser project. The goal of the laser physics experiments conducted at the Phebus facility in 1994--1995 is to validate some design issues of the Megajoule laser project, namely power balance and frequency conversion.

  4. Development and application of an automatic system for measuring the laser camera

    International Nuclear Information System (INIS)

    Feng Shuli; Peng Mingchen; Li Kuncheng

    2004-01-01

Objective: To provide an automatic system for measuring the imaging quality of laser cameras, implemented as an automatic measurement and analysis system. Methods: On a dedicated imaging workstation (SGI 540), the procedure was written in the Matlab language. An automatic measurement and analysis system for laser camera imaging quality was developed according to the imaging quality measurement standard for laser cameras of the International Electrotechnical Commission (IEC). The measurement system applies digital signal processing theory, is based on the characteristics of digital images, and performs the automatic measurement and analysis using the sample pictures supplied with the laser camera. Results: All the imaging quality parameters of a laser camera, including the H-D and MTF curves, optical density at low, middle and high resolution, various geometric distortions, maximum and minimum density, and the dynamic range of the gray scale, could be measured by this system. The system was applied to measuring the laser cameras in 20 hospitals in Beijing. The results showed that the system provides objective and quantitative data, accurately evaluates the imaging quality of a laser camera, and can correct results obtained by manual measurement based on the camera's sample pictures. Conclusion: The automatic measuring system is an effective and objective tool for testing the quality of laser cameras, and it lays a foundation for future research.
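    One of the measured quantities, the MTF, can be estimated from an edge profile with a short numpy sketch. This is the generic edge-spread-function method (differentiate to get the line-spread function, take the FFT magnitude, normalise), not necessarily the procedure prescribed by the IEC standard or used in the authors' Matlab code; the pixel pitch below is an assumed value.

```python
import numpy as np

def mtf_from_edge_profile(edge_profile, sample_pitch_mm=0.1):
    """Estimate the MTF from a 1-D edge-spread function (ESF)."""
    lsf = np.gradient(np.asarray(edge_profile, dtype=float))  # line-spread function
    lsf *= np.hanning(lsf.size)                # reduce truncation artefacts
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]               # normalise to zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch_mm)  # cycles per mm
    return freqs, mtf

# Synthetic blurred edge standing in for a scanned test pattern.
x = np.linspace(-5, 5, 256)
esf = 1.0 / (1.0 + np.exp(-x / 0.4))
freqs, mtf = mtf_from_edge_profile(esf)
print(f"MTF at {freqs[10]:.2f} cycles/mm: {mtf[10]:.3f}")
```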

  5. Gestalt Principles for Attention and Segmentation in Natural and Artificial Vision Systems

    OpenAIRE

    Kootstra, Gert; Bergström, Niklas; Kragic, Danica

    2011-01-01

Gestalt psychology studies how the human visual system organizes complex visual input into unitary elements. In this paper we show how the Gestalt principles for perceptual grouping and for figure-ground segregation can be used in computer vision. A number of studies are presented that demonstrate the applicability of Gestalt principles to the prediction of human visual attention and to the automatic detection and segmentation of unknown objects by a robotic system.

  6. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    Science.gov (United States)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.

  7. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still places a fundamental limit on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
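    The √N improvement from averaging uncorrelated spot measurements can be illustrated with a small simulation. Speckle is modelled here crudely as additive Gaussian noise on the spot centroid; this is a statistical illustration only, not a model of the actual sensor or optics.

```python
import numpy as np

# Averaging N spatially uncorrelated spot-centroid estimates reduces the
# standard deviation of the estimate roughly as 1/sqrt(N).
rng = np.random.default_rng(42)
true_position = 0.0
speckle_sigma = 1.0   # arbitrary units (assumed)
trials = 20000

for n in (1, 4, 16, 64):
    # Each trial: average n uncorrelated centroid estimates.
    estimates = rng.normal(true_position, speckle_sigma, (trials, n)).mean(axis=1)
    print(f"N={n:3d}  measured std={estimates.std():.3f}  "
          f"expected sigma/sqrt(N)={speckle_sigma / np.sqrt(n):.3f}")
```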

  8. Bond strength of an adhesive system irradiated with Nd:YAG laser in dentin treated with Er:YAG laser

    International Nuclear Information System (INIS)

    Malta, D A M P; De Andrade, M F; Costa, M M; Lizarelli, R F Z; Pelino, J E P

    2008-01-01

The purpose of this in vitro study was to verify, through microtensile bond testing, the bond strength of an adhesive system irradiated with an Nd:YAG laser in dentin previously treated with an Er:YAG laser. Twenty caries-free extracted human third molars were used. The teeth were divided into four experimental groups (n = 5): (G1) control group; (G2) irradiation of the adhesive system with the Nd:YAG laser; (G3) dentin treatment with the Er:YAG laser; (G4) dentin treatment with the Er:YAG laser followed by irradiation of the adhesive system with the Nd:YAG laser. The Er:YAG laser fluence for the dentin treatment was 60 J/cm². The adhesive system was irradiated with the Nd:YAG laser at a fluence of 100 J/cm². Dental restorations were performed with Adper Single Bond 2/Z250. One tooth from each group was prepared for evaluation of the adhesive interface under SEM, and bond failure tests were also performed and evaluated. The statistical analysis showed significant differences between groups G1 and G3, G1 and G4, G2 and G3, and G2 and G4, and similarity between groups G1 and G2, and G3 and G4. Adhesive failures were predominant in all the experimental groups. The SEM analysis showed an adhesive interface with features confirming the results of the mechanical tests. Irradiation of the adhesive system with the Nd:YAG laser did not influence the bond strength in dentin, whether or not the dentin had been treated with the Er:YAG laser.

  9. Optical system for laser triggering of PBFA II

    International Nuclear Information System (INIS)

    Hamil, R.A.; Seamons, L.O.; Schanwald, L.P.; Gerber, R.A.

    1985-01-01

The PBFA II laser triggering optical system consists of nearly 300 optical components. These optics must be sufficiently precise to preserve the laser beam quality, as well as to distribute the energy of the UV laser beam equally to the 36 gas-filled 5.5 MV switches at precisely the same instant. Both the index variation and the cleanliness of the air along the laser path must be controlled. The manual alignment system is capable of alignment to better than the acceptable error of 200 microradians (laser to switches). A technique has been devised to ease the alignment procedure by using a special high-gain video camera and a tool alignment telescope to view retroreflective tape targets having optical brightness gains over white surfaces of 10³. The camera is a charge-coupled detector intensified by a double microchannel plate having an optical gain of between 10⁴ and 10⁵

  10. System of technical vision for autonomous unmanned aerial vehicles

    Science.gov (United States)

    Bondarchuk, A. S.

    2018-05-01

This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in the frames from a camera mounted on the UAV. The trained classifier is invariant to rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the technical vision system to determine more accurately the location of the objects of interest and their movement relative to the camera.

  11. Lasers and power systems for inertial confinement fusion reactors

    International Nuclear Information System (INIS)

    Stark, E.E. Jr.

    1978-01-01

    After discussing the role of lasers in ICF and the candidate lasers, several important areas of technology requirements are discussed. These include the beam transport system, the pulsed power system and the gas flow system. The system requirements, state of the art, as well as needs and prospects for new technology developments are given. Other technology issues and promising developments are described briefly

  12. Vacuum mechatronic laser alignment system on the Nova laser

    International Nuclear Information System (INIS)

    Holliday, M.; Wong, K.; Shelton, R.

    1991-11-01

The experiments conducted on NOVA investigate inertially confined laser fusion reactions. To this end, the ten beams of the laser are aligned to within 30 mm. The target chamber employs a vacuum mechatronic reticle/target positioning system to accomplish this. It is a five degree-of-freedom chamber-resident system, known as the Alignment Aids Positioner or AAP. The AAP aids in beam and diagnostic alignment by accurately positioning a reticle at target chamber center to within 7 mm. The AAP system increases target positioning and alignment flexibility and accuracy through the use of a computer controlled multi degree-of-freedom stage assembly. This device uses microstepping DC stepper motors with encoders to achieve closed-loop control in a 10⁻⁶ torr vacuum. The AAP has two positioning regimes for moving the alignment reticle and performing beam alignment. One is coarse positioning in the Y-Z plane, which moves a high resolution stage assembly to target chamber center. The other is high resolution movement in the X, Y, Z and θ directions. 5 refs., 9 figs

  13. Numerical model and analysis of an energy-based system using microwaves for vision correction

    Science.gov (United States)

    Pertaub, Radha; Ryan, Thomas P.

    2009-02-01

    A treatment system was developed utilizing a microwave-based procedure capable of treating myopia and offering a less invasive alternative to laser vision correction without cutting the eye. Microwave thermal treatment elevates the temperature of the paracentral stroma of the cornea to create a predictable refractive change while preserving the epithelium and deeper structures of the eye. A pattern of shrinkage outside of the optical zone may be sufficient to flatten the central cornea. A numerical model was set up to investigate both the electromagnetic field and the resultant transient temperature distribution. A finite element model of the eye was created and the axisymmetric distribution of temperature calculated to characterize the combination of controlled power deposition combined with surface cooling to spare the epithelium, yet shrink the cornea, in a circularly symmetric fashion. The model variables included microwave power levels and pulse width, cooling timing, dielectric material and thickness, and electrode configuration and gap. Results showed that power is totally contained within the cornea and no significant temperature rise was found outside the anterior cornea, due to the near-field design of the applicator and limited thermal conduction with the short on-time. Target isothermal regions were plotted as a result of common energy parameters along with a variety of electrode shapes and sizes, which were compared. Dose plots showed the relationship between energy and target isothermic regions.
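    As a loose stand-in for the kind of transient thermal calculation described, here is a highly simplified 1-D explicit finite-difference sketch of sub-surface heating with a cooled front surface. It is not the authors' axisymmetric finite-element model of the eye; the material properties, geometry, heating band, coolant temperature, and time step below are all assumptions chosen only to make the sketch run.

```python
import numpy as np

alpha = 1.4e-7        # thermal diffusivity of tissue, m^2/s (assumed)
depth = 600e-6        # modelled tissue depth, m (assumed)
nz, dt, steps = 60, 1e-4, 5000
dz = depth / (nz - 1)
assert alpha * dt / dz**2 < 0.5   # explicit-scheme stability condition

temp = np.full(nz, 35.0)          # initial tissue temperature, °C
heating = np.zeros(nz)
heating[5:25] = 120.0             # assumed volumetric heating band, °C/s
surface_coolant = 20.0            # chilled front-surface temperature, °C

for _ in range(steps):
    lap = np.zeros_like(temp)
    lap[1:-1] = (temp[2:] - 2 * temp[1:-1] + temp[:-2]) / dz**2
    temp[1:-1] += dt * (alpha * lap[1:-1] + heating[1:-1])
    temp[0] = surface_coolant     # cooled anterior surface (epithelium spared)
    temp[-1] = 35.0               # deep tissue held at body temperature

print(f"peak temperature: {temp.max():.1f} °C "
      f"at depth {temp.argmax() * dz * 1e6:.0f} µm")
```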

  14. BENCHMARKING MOBILE LASER SCANNING SYSTEMS USING A PERMANENT TEST FIELD

    Directory of Open Access Journals (Sweden)

    H. Kaartinen

    2012-07-01

The objective of the study was to benchmark the geometric accuracy of mobile laser scanning (MLS) systems using a permanent test field under good GNSS coverage. Mobile laser scanning, also called mobile terrestrial laser scanning, is currently a rapidly developing area of laser scanning in which laser scanners, GNSS and an IMU are mounted onboard a moving vehicle. MLS can be considered to fill the gap between airborne and terrestrial laser scanning. Data provided by MLS systems can be characterized by the following technical parameters: (a) a point density in the range of 100-1000 points per m² at 10 m distance, (b) a distance measurement accuracy of 2-5 cm, and (c) an operational scanning range from 1 to 100 m. Several commercial systems, including e.g. Riegl, Optech and others, and some research mobile laser scanning systems surveyed the test field using predefined driving speeds and directions. The acquired georeferenced point clouds were delivered for analysis. The geometric accuracy of the point clouds was determined using the reference targets that could be identified and measured from the point cloud. Results show that in good GNSS conditions most systems can reach an accuracy of 2 cm both in plan and elevation. The accuracy of a low-cost system, the price of which is less than a tenth of the other systems, seems to be within a few centimetres, at least in ground elevation determination. Inaccuracies in the relative orientation of the instruments lead to systematic errors and, when several scanners are used, to multiple reproductions of the objects. Mobile laser scanning systems can collect high density point cloud data with high accuracy. A permanent test field is well suited to verifying and comparing the performance of different mobile laser scanning systems. The accuracy of the relative orientation between the mapping instruments needs more attention. For example, if the object is seen double in the point cloud due to imperfect boresight calibration between two
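    The accuracy-evaluation step (comparing target positions extracted from the point cloud with surveyed reference coordinates) amounts to computing planimetric and elevation RMSE, as in the short sketch below. The coordinate values are fabricated placeholders for illustration, not data from the benchmark.

```python
import numpy as np

# Surveyed reference coordinates vs. coordinates measured from the MLS
# point cloud, in metres (placeholder values only).
reference = np.array([[100.000, 200.000, 10.000],
                      [105.000, 201.500, 10.200],
                      [110.000, 198.700,  9.950]])
from_cloud = np.array([[100.012, 200.008, 10.018],
                       [105.021, 201.489, 10.177],
                       [109.985, 198.712,  9.968]])

diff = from_cloud - reference
rmse_plane = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))  # XY error
rmse_elev = np.sqrt(np.mean(diff[:, 2] ** 2))                     # Z error
print(f"planimetric RMSE: {rmse_plane * 100:.1f} cm")
print(f"elevation RMSE:   {rmse_elev * 100:.1f} cm")
```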

  15. 2020 Vision for Tank Waste Cleanup (One System Integration) - 12506

    Energy Technology Data Exchange (ETDEWEB)

    Harp, Benton; Charboneau, Stacy; Olds, Erik [US DOE (United States)

    2012-07-01

    The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot. They also couldn't be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold these chemicals until a more permanent solution could be found. The Cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing Hanford's tank waste. The initiation of hot operations and subsequent full operations

  16. Mode-Locked Semiconductor Lasers for Optical Communication Systems

    DEFF Research Database (Denmark)

    Yvind, Kresten; Larsson, David; Oxenløwe, Leif Katsuo

    2005-01-01

    We present investigations on 10 and 40 GHz monolithic mode-locked lasers for applications in optical communications systems. New all-active lasers with one to three quantum wells have been designed, fabricated and characterized....

  17. Development of YAG laser cutting system for decommissioning nuclear equipments

    International Nuclear Information System (INIS)

    Kasai, Takeshi; Nitta, Kazuhiko; Hosoda, Hiroshi.

    1995-01-01

Remote-controlled cutting technology and reduction of secondary waste products are required of cutting systems for decommissioning nuclear equipment. On the view that laser cutting with an Nd:YAG laser is effective for this purpose, we developed a laser cutting machine and carried out cutting tests on several stainless steel plates. As a result, stainless steel plate with a thickness of 22 mm could be cut using an optical fiber, which can flexibly deliver the laser power, and the applicability of this laser cutting system to the decommissioning of nuclear equipment was verified. (author)

  18. Development of YAG laser cutting system for decommissioning nuclear equipments

    Energy Technology Data Exchange (ETDEWEB)

    Kasai, Takeshi [Fuji Electric Co. Research and Development Ltd., Yokosuka, Kanagawa (Japan); Nitta, Kazuhiko; Hosoda, Hiroshi

    1995-07-01

Remote-controlled cutting technology and reduction of secondary waste products are required of cutting systems for decommissioning nuclear equipment. On the view that laser cutting with an Nd:YAG laser is effective for this purpose, we developed a laser cutting machine and carried out cutting tests on several stainless steel plates. As a result, stainless steel plate with a thickness of 22 mm could be cut using an optical fiber, which can flexibly deliver the laser power, and the applicability of this laser cutting system to the decommissioning of nuclear equipment was verified. (author).

  19. Optical response in a laser-driven quantum pseudodot system

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, D. Gul [Physics Department, Graduate School of Natural and Applied Sciences, Dokuz Eylül University, 35390 Izmir (Turkey); Sakiroglu, S., E-mail: serpil.sakiroglu@deu.edu.tr [Physics Department, Faculty of Science, Dokuz Eylül University, 35390 Izmir (Turkey); Ungan, F.; Yesilgul, U. [Department of Optical Engineering, Faculty of Technology, Cumhuriyet University, 58140 Sivas (Turkey); Kasapoglu, E. [Physics Department, Faculty of Science, Cumhuriyet University, 58140 Sivas (Turkey); Sari, H. [Department of Primary Education, Faculty of Education, Cumhuriyet University, 58140 Sivas (Turkey); Sokmen, I. [Physics Department, Faculty of Science, Dokuz Eylül University, 35390 Izmir (Turkey)

    2017-03-15

We investigate theoretically the intense laser-induced optical absorption coefficients and refractive index changes in a two-dimensional quantum pseudodot system under a uniform magnetic field. The effects of a non-resonant, monochromatic intense laser field upon the system are treated within the framework of the high-frequency Floquet approach, in which the system is assumed to be governed by a laser-dressed potential. Linear and nonlinear absorption coefficients and relative changes in the refractive index are obtained by means of the compact-density matrix approach and the iterative method. The results of numerical calculations for a typical GaAs quantum dot reveal that the optical response depends strongly on the magnitude of the external magnetic field and the characteristic parameters of the confinement potential. Moreover, we have demonstrated that the intense laser field modifies the confinement and thereby causes remarkable changes in the linear and nonlinear optical properties of the system.

  20. Optical response in a laser-driven quantum pseudodot system

    International Nuclear Information System (INIS)

    Kilic, D. Gul; Sakiroglu, S.; Ungan, F.; Yesilgul, U.; Kasapoglu, E.; Sari, H.; Sokmen, I.

    2017-01-01

We investigate theoretically the intense laser-induced optical absorption coefficients and refractive index changes in a two-dimensional quantum pseudodot system under a uniform magnetic field. The effects of a non-resonant, monochromatic intense laser field upon the system are treated within the framework of the high-frequency Floquet approach, in which the system is assumed to be governed by a laser-dressed potential. Linear and nonlinear absorption coefficients and relative changes in the refractive index are obtained by means of the compact-density matrix approach and the iterative method. The results of numerical calculations for a typical GaAs quantum dot reveal that the optical response depends strongly on the magnitude of the external magnetic field and the characteristic parameters of the confinement potential. Moreover, we have demonstrated that the intense laser field modifies the confinement and thereby causes remarkable changes in the linear and nonlinear optical properties of the system.