WorldWideScience

Sample records for machine vision systems

  1. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from products under inspection. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies and the PCB inspection system is in the process of being deployed in a manufacturing plant.

  2. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  3. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  4. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments, with high variability in illumination, irregular terrain conditions or different plant growth states, among others. In this regard, three main topics are addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic, with illustrative examples focused on specific applications in agriculture, although they could also be applied in contexts other than agriculture. A case study is provided from research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project, funded by the European Union, for effective weed control in maize fields (wide-row crops), where the machine vision system onboard the autonomous vehicles was the most relevant part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided, together with a review of methods and approaches on these topics.

  5. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  6. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – is discussed, followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  7. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  8. A machine vision system for the calibration of digital thermometers

    International Nuclear Information System (INIS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Alvarez-Valado, Victor; Martín, Fernando; Formella, Arno

    2009-01-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has been shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown on displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without requiring excessive attention from the laboratory technicians.
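
    A hedged illustration of one possible "classical" classifier for such displays is sketched below, assuming a seven-segment layout and that each digit has already been segmented into a binary region of interest. The segment sampling points, threshold and helper names are illustrative assumptions; the paper's perception-based classifier is not reproduced here.

    ```python
    # Hypothetical sketch of a classical seven-segment digit reader, assuming the
    # digit is already a binary ROI (white segments on black). Positions and
    # thresholds are illustrative assumptions, not values from the paper.
    import numpy as np

    # (row, col) sampling points for segments a..g, as fractions of the ROI size
    SEGMENT_POINTS = {
        "a": (0.10, 0.50), "b": (0.30, 0.85), "c": (0.70, 0.85),
        "d": (0.90, 0.50), "e": (0.70, 0.15), "f": (0.30, 0.15),
        "g": (0.50, 0.50),
    }
    DIGIT_TABLE = {
        frozenset("abcdef"): 0, frozenset("bc"): 1, frozenset("abdeg"): 2,
        frozenset("abcdg"): 3, frozenset("bcfg"): 4, frozenset("acdfg"): 5,
        frozenset("acdefg"): 6, frozenset("abc"): 7,
        frozenset("abcdefg"): 8, frozenset("abcdfg"): 9,
    }

    def read_digit(roi, on_threshold=0.5):
        """Classify one seven-segment digit from a binary (0/255) ROI."""
        h, w = roi.shape
        lit = set()
        for name, (fr, fc) in SEGMENT_POINTS.items():
            r, c = int(fr * h), int(fc * w)
            patch = roi[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]
            if patch.mean() / 255.0 > on_threshold:   # segment is "on"
                lit.add(name)
        return DIGIT_TABLE.get(frozenset(lit))        # None if pattern unknown
    ```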

  9. Machine vision system for measuring conifer seedling morphology

    Science.gov (United States)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line-scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
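
    The width-from-backlighting idea can be illustrated with a minimal sketch, assuming a grayscale image in which the seedling appears dark against a bright backlight and a known transverse scale in mm per pixel. The threshold, scale and row index below are illustrative assumptions rather than values from the paper.

    ```python
    # Minimal sketch of a backlit width measurement on one image row.
    # Threshold, scale and the root-collar row are assumed inputs.
    import numpy as np

    def stem_diameter_mm(image, row, mm_per_pixel=0.05, thresh=128):
        """Return the seedling width (mm) measured along one image row."""
        profile = image[row, :] < thresh          # True where the seedling blocks light
        cols = np.flatnonzero(profile)
        if cols.size == 0:
            return 0.0
        width_px = cols[-1] - cols[0] + 1         # span of the outermost dark pixels
        return width_px * mm_per_pixel
    ```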

  10. Machine vision system for automated detection of stained pistachio nuts

    Science.gov (United States)

    Pearson, Tom C.

    1995-01-01

    A machine vision system was developed to separate stained pistachio nuts, which comprise about 5% of the California crop, from unstained nuts. The system may be used to reduce the labor involved with manual grading or to remove aflatoxin-contaminated product from low-grade process streams. The system was tested on two different pistachio process streams: the bichromatic color sorter reject stream and the small nut shelling stock stream. The system had a minimum overall error rate of 14% for the bichromatic sorter reject stream and 15% for the small shelling stock stream.

  11. Development of machine vision system for PHWR fuel pellet inspection

    Energy Technology Data Exchange (ETDEWEB)

    Kamalesh Kumar, B.; Reddy, K.S.; Lakshminarayana, A.; Sastry, V.S.; Ramana Rao, A.V. [Nuclear Fuel Complex, Hyderabad, Andhra Pradesh (India); Joshi, M.; Deshpande, P.; Navathe, C.P.; Jayaraj, R.N. [Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh (India)

    2008-07-01

    Nuclear Fuel Complex, a constituent of the Department of Atomic Energy, India, is responsible for manufacturing nuclear fuel in India. Over a million uranium dioxide pellets fabricated per annum need visual inspection. In order to overcome the limitations of human-based visual inspection, NFC has undertaken the development of a machine vision system. The development involved designing various subsystems, viz. a mechanical and control subsystem for handling and rotation of fuel pellets, a lighting subsystem for illumination, an image acquisition system, an image processing system, and their integration. This paper brings out details of the various subsystems and results obtained from the trials conducted. (author)

  12. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  13. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  14. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles (rovers) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  15. Understanding and applying machine vision

    CERN Document Server

    Zeuch, Nello

    2000-01-01

    A discussion of applications of machine vision technology in the semiconductor, electronic, automotive, wood, food, pharmaceutical, printing, and container industries. It describes systems that enable projects to move forward swiftly and efficiently, and focuses on the nuances of the engineering and system integration of machine vision technology.

  16. Machine vision system for remote inspection in hazardous environments

    International Nuclear Information System (INIS)

    Mukherjee, J.K.; Krishna, K.Y.V.; Wadnerkar, A.

    2011-01-01

    Visual inspection of radioactive components needs remote inspection systems for human safety and for protecting equipment (CCD imagers) from radiation. Elaborate view-transport optics is required to deliver images to safe areas while maintaining fidelity of the image data. Automation of the system requires robots to operate such equipment. A robotized periscope has been developed to meet the challenge of remote safe viewing and vision-based inspection. (author)

  17. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  18. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  19. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  20. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  1. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  2. Optics, illumination, and image sensing for machine vision II

    International Nuclear Information System (INIS)

    Svetkoff, D.J.

    1987-01-01

    These proceedings collect papers on the general subject of machine vision. Topics include illumination and viewing systems, x-ray imaging, automatic SMT inspection with x-ray vision, and 3-D sensing for machine vision

  3. A low-cost machine vision system for the recognition and sorting of small parts

    Science.gov (United States)

    Barea, Gustavo; Surgenor, Brian W.; Chauhan, Vedang; Joshi, Keyur D.

    2018-04-01

    An automated machine vision-based system for the recognition and sorting of small parts was designed, assembled and tested. The system was developed to address a need to expose engineering students to the issues of machine vision and assembly automation technology, with readily available and relatively low-cost hardware and software. This paper outlines the design of the system and presents experimental performance results. Three different styles of plastic gears, together with three different styles of defective gears, were used to test the system. A pattern matching tool was used for part classification. Nine experiments were conducted to demonstrate the effects of changing various hardware and software parameters, including: conveyor speed, gear feed rate, classification, and identification score thresholds. It was found that the system could achieve a maximum system accuracy of 95% at a feed rate of 60 parts/min, for a given set of parameter settings. Future work will be looking at the effect of lighting.
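
    A minimal sketch of template-based part classification with a score threshold is given below, assuming one grayscale template image per gear style and using OpenCV's normalized cross-correlation matcher. The template file names and threshold value are illustrative assumptions; the specific pattern matching tool used in the paper is not identified here.

    ```python
    # Hedged sketch: classify a part by the best-scoring template match and
    # reject it when the score falls below an identification threshold.
    import cv2

    # one template image per gear style (file names are assumptions)
    TEMPLATES = {name: cv2.imread(f"{name}.png", cv2.IMREAD_GRAYSCALE)
                 for name in ("gear_a", "gear_b", "gear_c")}

    def classify_part(frame_gray, score_threshold=0.80):
        """Return (best_class, score); best_class is None below the threshold."""
        best_name, best_score = None, -1.0
        for name, tmpl in TEMPLATES.items():
            result = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, _ = cv2.minMaxLoc(result)
            if max_val > best_score:
                best_name, best_score = name, max_val
        if best_score < score_threshold:
            return None, best_score          # reject: unknown or defective part
        return best_name, best_score
    ```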

  4. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    Science.gov (United States)

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

    Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  5. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM and Flash, by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
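
    A software sketch of the SAD block-matching step is shown below for reference, assuming rectified grayscale image pairs. The 5 × 5 window and 64-pixel disparity range follow the figures quoted in the abstract, but the FPGA pipeline itself is not reproduced, and this plain Python loop is far too slow for real-time use.

    ```python
    # Illustrative SAD block matching over rectified left/right grayscale images.
    import numpy as np

    def sad_disparity(left, right, max_disp=64, half_win=2):
        """Return a dense disparity map using a (2*half_win+1)^2 SAD window."""
        h, w = left.shape
        left = left.astype(np.int32)
        right = right.astype(np.int32)
        disparity = np.zeros((h, w), dtype=np.uint8)
        for y in range(half_win, h - half_win):
            for x in range(half_win + max_disp, w - half_win):
                patch = left[y - half_win:y + half_win + 1,
                             x - half_win:x + half_win + 1]
                best_d, best_cost = 0, None
                for d in range(max_disp):
                    cand = right[y - half_win:y + half_win + 1,
                                 x - d - half_win:x - d + half_win + 1]
                    cost = np.abs(patch - cand).sum()   # sum of absolute differences
                    if best_cost is None or cost < best_cost:
                        best_d, best_cost = d, cost
                disparity[y, x] = best_d
        return disparity
    ```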

  6. Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2013-05-01

    Wheel alignment, consisting of properly checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field, since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on machine vision have been widely studied in order to automatically detect wheels' characteristic angles. In order to overcome the limitations of existing methodologies, due to measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to characteristic planes. Such planes, properly referred to a global coordinate system, are used for determining wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system resulted in full compatibility with the expected accuracy of wheel alignment systems.

  7. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl

  8. An Automatic Assembling System for Sealing Rings Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2017-01-01

    In order to grab and place the sealing rings of battery lids quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically with the machine vision module. Then the industrial robot is controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can grab the sealing rings and place them on the sealing port of the fast-moving battery lid successfully. More importantly, the proposed system noticeably improves the efficiency of the battery production line.

  9. Principles of image processing in machine vision systems for the color analysis of minerals

    Science.gov (United States)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At present, color sorting is one of the promising methods of mineral raw material enrichment. This method is based on registration of color differences between images of the analyzed objects. As is generally known, the difficulty of delimiting close color tints when sorting low-contrast minerals is one of the main disadvantages of the color sorting method. This can be related to a wrong choice of color model and incomplete image processing in the machine vision system realizing the color sorting algorithm. Another problem is the necessity of reconfiguring the image processing features when the type of analyzed minerals changes. This is due to the fact that the optical properties of mineral samples vary from one mineral deposit to another. Therefore searching for values of the image processing features is a non-trivial task, and this task doesn't always have an acceptable solution. In addition, there are no uniform guidelines for determining criteria of mineral sample separation. It is assumed that the reconfiguration of image processing features has to be done by machine learning, but in practice it is carried out by adjusting the operating parameters to values which are satisfactory for one specific enrichment task. This approach usually means that the machine vision system is unable to rapidly estimate the concentration rate of the analyzed mineral ore using the color sorting method. This paper presents the results of research aimed at addressing the mentioned shortcomings in image processing organization for machine vision systems which are used for color sorting of mineral samples. The principles of color analysis for low-contrast minerals by using machine vision systems are also studied. In addition, a special processing algorithm for color images of mineral samples is developed. The algorithm determines automatically the criteria of mineral sample separation based on an analysis of representative mineral samples. Experimental studies of the proposed algorithm

  10. CATEGORIZATION OF EXTRANEOUS MATTER IN COTTON USING MACHINE VISION SYSTEMS

    Science.gov (United States)

    The Cotton Trash Identification System (CTIS) was developed at the Southwestern Cotton Ginning Research Laboratory to identify and categorize extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneous matter calls assigned ...

  11. A Multiple Sensor Machine Vision System Technology for the Hardwood

    Science.gov (United States)

    Richard W. Conners; D.Earl Kline; Philip A. Araman

    1995-01-01

    For the last few years the authors have been extolling the virtues of a multiple sensor approach to hardwood defect detection. Since 1989 the authors have actively been trying to develop such a system. This paper details some of the successes and failures that have been experienced to date. It also discusses what remains to be done and gives time lines for the...

  12. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power compared to the single FPGA chip having hardware modules and a soft-core processor.

  13. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has completely shifted from manual inspections to machine-assisted vision inspection methodology. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine vision based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, having different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found by using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed at the optimum location.

  14. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  15. A New Approach to Spindle Radial Error Evaluation Using a Machine Vision System

    Directory of Open Access Journals (Sweden)

    Kavitha C.

    2017-03-01

    The spindle rotational accuracy is one of the important issues in a machine tool which affects the surface topography and dimensional accuracy of a workpiece. This paper presents a machine-vision-based approach to radial error measurement of a lathe spindle using a CMOS camera and a PC-based image processing system. In the present work, a precisely machined cylindrical master is mounted on the spindle as a datum surface and variations of its position are captured using the camera for evaluating the runout of the spindle. The Circular Hough Transform (CHT) is used to detect variations of the centre position of the master cylinder during spindle rotation at subpixel level from a sequence of images. Radial error values of the spindle are evaluated using the Fourier series analysis of the centre position of the master cylinder calculated with the least squares curve fitting technique. The experiments have been carried out on a lathe at different operating speeds and the spindle radial error estimation results are presented. The proposed method provides a simpler approach to on-machine estimation of the spindle radial error in machine tools.
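
    A rough sketch of the measurement chain is given below, assuming OpenCV is used in place of the authors' implementation: locate the master-cylinder centre in each frame with a circular Hough transform, then analyse the centre trajectory with a Fourier transform. All Hough parameters are placeholders, and the subpixel least-squares refinement described in the paper is omitted.

    ```python
    # Illustrative sketch: per-frame circle centre via Hough transform, then
    # harmonic analysis of the centre trajectory for the radial error.
    import cv2
    import numpy as np

    def centre_from_frame(gray):
        """Return (x, y) of the strongest circle found in one frame, or None."""
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                                   param1=100, param2=40,
                                   minRadius=80, maxRadius=160)
        if circles is None:
            return None
        x, y, _r = circles[0][0]
        return float(x), float(y)

    def radial_error_harmonics(centres_x):
        """Fourier coefficients of the x-centre trajectory over the revolutions."""
        x = np.asarray(centres_x) - np.mean(centres_x)   # remove set-up offset
        return np.fft.rfft(x) / len(x)                   # harmonic amplitudes of the runout
    ```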

  16. Machine Learning for Robotic Vision

    OpenAIRE

    Drummond, Tom

    2018-01-01

    Machine learning is a crucial enabling technology for robotics, in particular for unlocking the capabilities afforded by visual sensing. This talk will present research within Prof Drummond’s lab that explores how machine learning can be developed and used within the context of Robotic Vision.

  17. Real-time machine vision system using FPGA and soft-core processor

    Science.gov (United States)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.

  18. A real-time surface inspection system for precision steel balls based on machine vision

    Science.gov (United States)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are one of the most fundamental components for motion and power transmission parts and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs s⁻¹ with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.

  19. Machine vision system: a tool for quality inspection of food and agricultural products.

    Science.gov (United States)

    Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A

    2012-04-01

    Quality inspection of food and agricultural produce is difficult and labor intensive. Simultaneously, with increased expectations for food products of high quality and safety standards, the need for accurate, fast and objective quality determination of these characteristics in food products continues to grow. However, in India these operations are generally manual, which is costly as well as unreliable, because human decisions in identifying quality factors such as appearance, flavor, nutrient content, texture, etc., are inconsistent, subjective and slow. Machine vision provides one alternative: an automated, non-destructive and cost-effective technique to accomplish these requirements. This inspection approach, based on image analysis and processing, has found a variety of different applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, grain quality and characteristic examination, and quality evaluation of other food products like bakery products, pizza, cheese, and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components and recent work reported on food and agricultural produce.

  20. Automatic turbot fish cutting using machine vision

    OpenAIRE

    Martín Rodríguez, Fernando; Barral Martínez, Mónica

    2015-01-01

    This paper is about the design of an automated machine to cut turbot fish specimens. Machine vision is a key part of this project as it is used to compute a cutting curve for the specimen's head. This task is impossible to carry out by mechanical means. Machine vision is used to detect the head boundary and a robot is used to cut the head. Afterwards mechanical systems are used to slice the fish to get an easy presentation for the end consumer (as fish fillets that can be easily marketed ...

  1. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  2. Detection of Two Types of Weed through Machine Vision System: Improving Site-Specific Spraying

    Directory of Open Access Journals (Sweden)

    S Sabzi

    2018-03-01

    Introduction: With the increase in world population, one of the approaches to providing food is the use of site-specific management systems, or so-called precision farming. In this management system, management of crop production inputs such as fertilizers, lime, herbicides, seed, etc. is done based on farm location features, with the aim of reducing waste, increasing revenues and maintaining environmental quality. Precision farming involves various aspects and is applicable on farm fields at all stages of tillage, planting, and harvesting. Today, in line with precision farming purposes, and to control weeds, pests, and diseases, all the efforts of specialists in precision farming are aimed at reducing the amount of chemical substances applied to products. Although herbicides improve the quality and quantity of agricultural production, the possibility of applying them inappropriately and unreasonably is very high. If the dose is too low, weed control is not performed correctly; if the dosage is too high, herbicides can be toxic for crops, can be transferred to soil and stay in it for a long time, and can penetrate to groundwater. By applying herbicides at a variable rate, significant cost savings and reduced environmental damage to the products and environment become possible. It is evident that in large-scale modern agriculture, individual management of each plant without using some advanced technologies is not possible. Using machine vision systems is one of the precision farming techniques for identifying weeds. This study aimed to detect three plants, Centaurea depressa M.B., Malva neglecta and potato, using a machine vision system. Materials and Methods: In order to train the algorithm of the designed machine vision system, a platform moving at a speed of 10.34 was used for imaging Marfona potato fields. This platform consisted of a chassis, a camera (DFK23GM021, CMOS, 120 f/s, made in Germany), and a processor system equipped with Matlab 2015

  3. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  4. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    Science.gov (United States)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

    With the development of photoelectric detection technology, machine vision has found wider use in the field of industry. The paper mainly introduces a calibrator measuring system for automotive headlamp testers, of which a CCD image sampling system is the core. It also presents the measuring principle of the optical axial angle and light intensity, and proves the linear relationship between the calibrator's facula (light spot) illumination and the image plane illumination. The paper provides an important specification of the CCD imaging system. Image processing in MATLAB yields the flare's geometric midpoint and average gray level. By fitting the statistics via the method of least squares, we can obtain the regression equation between illumination and gray level. The paper analyzes the error of the experimental results of the measurement system, and gives the combined standard uncertainty and the uncertainty sources of the optical axial angle. The average measuring accuracy of the optical axial angle is controlled within 40''. The whole testing process uses digital means instead of artificial factors, which gives higher accuracy and better repeatability than other measuring systems.
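
    The least-squares regression between image-plane gray level and illumination can be illustrated with a short sketch; the sample data below are made up purely for illustration and do not come from the paper.

    ```python
    # Illustrative least-squares fit of mean gray level against reference illuminance,
    # and its inverse used to estimate illuminance from an image measurement.
    import numpy as np

    illuminance_lux = np.array([50.0, 100.0, 150.0, 200.0, 250.0])   # reference meter (assumed data)
    mean_gray_level = np.array([31.0, 63.0, 96.0, 128.0, 160.0])     # from the CCD image (assumed data)

    # first-order regression: gray = a * lux + b
    a, b = np.polyfit(illuminance_lux, mean_gray_level, deg=1)

    def lux_from_gray(gray):
        """Invert the regression to estimate illuminance from a measured gray level."""
        return (gray - b) / a
    ```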

  5. Tomato grading system using machine vision technology and neuro-fuzzy networks (ANFIS

    Directory of Open Access Journals (Sweden)

    H Izadi

    2016-04-01

    Introduction: The quality of agricultural products is associated with their color, size and health; grading of fruits is therefore regarded as an important step in post-harvest processing. In most cases, manual sorting depends on the available manpower, is time consuming and its accuracy cannot be guaranteed. Machine vision is known to be a useful tool for measuring external features (e.g. size, shape, color and defects), and in recent decades machine vision technology has been used for shape sorting. The main purpose of this study was to develop a new method for tomato grading and sorting using a neuro-fuzzy system (ANFIS) and to compare the accuracy of the ANFIS-predicted results with those suggested by a human expert. Materials and Methods: In this study, a total of 300 images of tomatoes (Rev ground), randomly harvested, were classified into 3 ripeness stages, 3 sizes and 2 health classes. The grading and sorting mechanism consisted of a lighting chamber (cloudy sky), a lighting source and a digital camera connected to a computer. The images were recorded in a special chamber with indirect illumination (cloudy sky) from four fluorescent lamps, one on each side; the camera lens entered the lighting chamber through a hole, which was the only opening to the outside. Three types of features were extracted from the final images: shape, color and texture. To obtain these features, images were needed in both color and binary format, following the procedure shown in Figure 1. For the first group, characteristics of the images were analyzed that could offer information on surface area (S.A.), maximum diameter (Dmax), minimum diameter (Dmin) and average diameter. Considering the importance of color in the acceptance of food quality by consumers, the following classification was used to estimate the apparent color of the tomato: 1. classified as red (red > 90%); 2. classified as light red (red or bold pink 60-90%); 3. classified as pink (red 30-60%); 4. classified as turning
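
    The color-classification step can be illustrated with a hedged sketch that estimates the fraction of "red" pixels on a segmented tomato and maps it to the four classes above. The HSV hue and saturation limits are illustrative assumptions; the paper's exact color definitions and the ANFIS stage are not reproduced.

    ```python
    # Illustrative ripeness classification from the red-pixel fraction of a
    # segmented tomato. Hue/saturation limits are assumptions.
    import cv2
    import numpy as np

    def ripeness_class(bgr, mask):
        """mask: uint8 binary image marking tomato pixels (from prior segmentation)."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # red hue wraps around 0 on OpenCV's 0-179 hue scale
        red1 = cv2.inRange(hsv, (0, 60, 40), (10, 255, 255))
        red2 = cv2.inRange(hsv, (170, 60, 40), (179, 255, 255))
        red = cv2.bitwise_and(cv2.bitwise_or(red1, red2), mask)
        ratio = red.sum() / max(mask.sum(), 1)   # fraction of tomato pixels that are red
        if ratio > 0.90:
            return "red"
        if ratio > 0.60:
            return "light red"
        if ratio > 0.30:
            return "pink"
        return "turning"
    ```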

  6. A System of Driving Fatigue Detection Based on Machine Vision and Its Application on Smart Device

    Directory of Open Access Journals (Sweden)

    Wanzeng Kong

    2015-01-01

    Driving fatigue is one of the most important factors in traffic accidents. In this paper, we propose an improved strategy and a practical system to detect driving fatigue based on machine vision and the Adaboost algorithm. Several face and eye classifiers are trained in advance with the Adaboost algorithm. The proposed strategy first detects the face efficiently with classifiers for frontal and deflected faces. Then, the candidate region of the eye is determined according to the geometric distribution of facial organs. Finally, trained classifiers for open and closed eyes are used to detect the eyes in the candidate region quickly and accurately. The indices, which consist of PERCLOS and the duration of the closed-eye state, are extracted from video frames in real time. Moreover, the system has been ported to smart devices, that is, smartphones or tablets, owing to their built-in cameras and powerful computing performance. Practical tests demonstrated that the proposed system can detect driver fatigue in real time with high accuracy. As the system has been implemented on portable smart devices, it could be widely used for driving fatigue detection in daily life.
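
    The PERCLOS index mentioned above can be illustrated with a short sketch: the proportion of frames in a sliding window in which the eyes are classified as closed, together with the longest continuous closed-eye run. The window length and thresholds are illustrative assumptions, and the Adaboost face/eye classifiers are assumed to exist upstream.

    ```python
    # Illustrative fatigue decision from per-frame eye states (True = closed).
    from collections import deque

    class FatigueMonitor:
        def __init__(self, window_frames=900, perclos_limit=0.15):
            self.states = deque(maxlen=window_frames)   # sliding window of eye states
            self.perclos_limit = perclos_limit

        def update(self, eyes_closed):
            """Feed one frame's eye state; return True if fatigue is suspected."""
            self.states.append(eyes_closed)
            perclos = sum(self.states) / len(self.states)
            longest_run = run = 0
            for closed in self.states:
                run = run + 1 if closed else 0
                longest_run = max(longest_run, run)
            # assumed limits: PERCLOS threshold, or ~1.5 s of continuous closure at 30 fps
            return perclos > self.perclos_limit or longest_run > 45
    ```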

  7. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    Science.gov (United States)

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

    Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also to enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry used to detect and segment images.

  8. Color machine vision system for process control in the ceramics industry

    Science.gov (United States)

    Penaranda Marques, Jose A.; Briones, Leoncio; Florez, Julian

    1997-08-01

    This paper is focused on the design of a machine vision system to solve a problem found in the manufacturing process of high quality polished porcelain tiles. This consists of sorting the tiles according to the criterion 'same appearance to the human eye', or in other words, by color and visual texture. In 1994 this problem was tackled and led to a prototype which became fully operational at production scale in a manufacturing plant, named Porcelanatto, S.A. The system has evolved and has been adapted to meet the particular needs of this manufacturing company. Among the main issues that have been improved, it is worth pointing out: (1) improvement in discerning subtle variations in color or texture, which are the main features of the visual appearance; (2) inspection time reduction, as a result of algorithm optimization and increasing computing power, so that 100 percent of the production can be inspected, reaching a maximum of 120 tiles/sec; (3) adaptation to the different types and models of tiles manufactured. The tiles vary not only in their visible patterns but also in dimensions, formats, thickness and allowances. In this sense, one major problem has been reaching an optimal compromise: the system must be sensitive enough to discern subtle variations in color, but at the same time insensitive to thickness variations in the tiles. The following parts have been used to build the system: an RGB color line scan camera (12 bits per channel), a PCI frame grabber, a PC, fiber optic based illumination and the algorithm, which is explained in section 4.

  9. Machine Vision Tests for Spent Fuel Scrap Characteristics

    International Nuclear Information System (INIS)

    BERGER, W.W.

    2000-01-01

    The purpose of this work is to perform a feasibility test of a Machine Vision system for potential use at the Hanford K basins during spent nuclear fuel (SNF) operations. This report documents the testing performed to establish functionality of the system including quantitative assessment of results. Fauske and Associates, Inc., which has been intimately involved in development of the SNF safety basis, has teamed with Agris-Schoen Vision Systems, experts in robotics, tele-robotics, and Machine Vision, for this work

  10. Computer vision and machine learning for archaeology

    NARCIS (Netherlands)

    van der Maaten, L.J.P.; Boon, P.; Lange, G.; Paijmans, J.J.; Postma, E.

    2006-01-01

    Until now, computer vision and machine learning techniques have barely contributed to the archaeological domain. The use of these techniques can support archaeologists in their assessment and classification of archaeological finds. The paper illustrates the use of computer vision techniques for

  11. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision systems of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor-made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.
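    The abstract mentions only a "simple control algorithm" without details; a minimal sketch of one plausible scheme is shown below, mapping the responses of left and right eye sensors to differential wheel speeds so that the robot steers towards (or, with the sign of the gain flipped, away from) the stronger stimulus. All parameter values are illustrative.

```python
def follow_target(left_intensity, right_intensity, base_speed=0.4, gain=0.6):
    """Map the responses of left/right compound-eye sensors to differential
    wheel speeds so the robot turns towards the stronger response
    (target-following); flipping the sign of `gain` yields obstacle avoidance."""
    error = right_intensity - left_intensity        # positive: stimulus to the right
    left_wheel = base_speed + gain * error
    right_wheel = base_speed - gain * error
    return left_wheel, right_wheel

print(follow_target(0.2, 0.8))   # target on the right -> left wheel speeds up
print(follow_target(0.8, 0.2))   # target on the left  -> right wheel speeds up
```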

  12. Design, development and evaluation of an online grading system for peeled pistachios equipped with machine vision technology and support vector machine

    Directory of Open Access Journals (Sweden)

    Hosein Nouri-Ahmadabadi

    2017-12-01

    Full Text Available In this study, an intelligent system based on combined machine vision (MV) and Support Vector Machine (SVM) was developed for sorting of peeled pistachio kernels and shells. The system was composed of a conveyor belt, lighting box, camera, processing unit and sorting unit. A color CCD camera was used to capture images. The images were digitalized by a capture card and transferred to a personal computer for further analysis. Initially, images were converted from the RGB color space to the HSV color space. For segmentation of the acquired images, the H-component in the HSV color space and the Otsu thresholding method were applied. A feature vector containing 30 color features was extracted from the captured images. A feature selection method based on sensitivity analysis was carried out to select superior features. The selected features were presented to the SVM classifier. Various SVM models with different kernel functions were developed and tested. The SVM model having a cubic polynomial kernel function and 38 support vectors achieved the best accuracy (99.17%) and was then selected for use in the online decision-making unit of the system. By launching the online system, it was found that the limiting factors of the system capacity were related to the hardware parts of the system (conveyor belt and pneumatic valves used in the sorting unit). The limiting factors led to a distance of 8 mm between the samples. The overall accuracy and capacity of the sorter were 94.33% and 22.74 kg/h, respectively. Keywords: Pistachio kernel, Sorting, Machine vision, Sensitivity analysis, Support vector machine
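    A condensed sketch of the described pipeline, assuming OpenCV and scikit-learn: Otsu thresholding on the hue channel for segmentation, a small set of colour statistics standing in for the paper's 30 features, and an SVM with a cubic polynomial kernel. The synthetic kernel/shell images and the reduced feature set are placeholders, not the authors' data or full feature list.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def color_features(image_bgr):
    """Segment the object by Otsu thresholding on the hue channel and return
    a small colour-feature vector (mean and std of H, S and V over the
    segmented pixels) -- a reduced stand-in for the paper's 30 features."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    _, mask = cv2.threshold(hsv[..., 0], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    pixels = hsv[mask > 0].astype(np.float32)
    if pixels.size == 0:                      # uniform image: fall back to all pixels
        pixels = hsv.reshape(-1, 3).astype(np.float32)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def synthetic_sample(bgr, size=64):
    """Uniformly coloured stand-in for a kernel or shell image."""
    img = np.zeros((size, size, 3), np.uint8)
    img[:] = bgr
    return img

# Greenish "kernels" (label 1) versus brownish "shells" (label 0).
samples = [synthetic_sample((60, 160, 90)), synthetic_sample((70, 170, 100)),
           synthetic_sample((90, 120, 160)), synthetic_sample((80, 110, 150))]
labels = [1, 1, 0, 0]

X = np.array([color_features(s) for s in samples])
clf = SVC(kernel="poly", degree=3).fit(X, labels)   # cubic polynomial kernel, as in the paper
print(clf.predict([color_features(synthetic_sample((65, 165, 95)))]))   # expect kernel (label 1)
```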

  13. Infrared machine vision system for the automatic detection of olive fruit quality.

    Science.gov (United States)

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel value intensity to classify the whole fruit. The detection of defects involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.

  14. Development of yarn breakage detection software system based on machine vision

    Science.gov (United States)

    Wang, Wenyuan; Zhou, Ping; Lin, Xiangyu

    2017-10-01

    Spinning mills cannot detect yarn breakage in a timely manner, which raises costs for textile enterprises. This paper presents a software system based on computer vision for real-time detection of yarn breakage. The system uses a Windows 8.1 tablet PC and a cloud server to carry out yarn breakage detection and management. The software running on the tablet PC is designed to collect yarn and location information for analysis and processing. The processed information is then sent via Wi-Fi over HTTP to the cloud server and stored in a Microsoft SQL Server 2008 database, so that yarn-break information can subsequently be queried and managed. Results are finally shown on a local display in real time to remind the operator to deal with broken yarn. The experimental results show that the missed detection rate of the system is not more than 5‰, with no false detections.
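    The paper does not give the message format, so the following sketch only illustrates the kind of HTTP reporting step described, posting one detected break event as JSON to a cloud endpoint; the URL and field names are hypothetical, and the server-side storage in SQL Server is outside the sketch.

```python
import json
import time
import urllib.request

def report_yarn_break(spindle_id, frame_id,
                      server="http://cloud.example.com/api/breaks"):
    """Send one yarn-break event to the cloud server as JSON over HTTP.
    The URL and field names are hypothetical; in the described system the
    server stores such records in a Microsoft SQL Server 2008 database."""
    event = {
        "spindle_id": spindle_id,      # which spinning position broke
        "frame_id": frame_id,          # which spinning frame / machine
        "timestamp": time.time(),      # detection time on the tablet
    }
    req = urllib.request.Request(
        server,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200

# report_yarn_break(spindle_id=17, frame_id=3)   # requires a reachable server
```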

  15. Development of Moire machine vision

    Science.gov (United States)

    Harding, Kevin G.

    1987-10-01

    Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and in providing high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation is developed to be able to make full field range measurement and three dimensional scene analysis.

  16. Detection of Watermelon Seeds Exterior Quality based on Machine Vision

    OpenAIRE

    Xiai Chen; Ling Wang; Wenquan Chen; Yanfeng Gao

    2013-01-01

    To investigate the detection of watermelon seed exterior quality, a machine vision system based on a least squares support vector machine was developed. Appearance characteristics of watermelon seeds, including area, perimeter, roughness, minimum enclosing rectangle and solidity, were calculated by image analysis after image preprocessing. The broken seeds, normal seeds and high-quality seeds were distinguished by a least squares support vector machine optimized by a genetic algorithm. Compared to the grid...

  17. Machine-vision based optofluidic cell sorting

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew

    [...] machine vision [1]. This approach is gentler, less invasive and more economical compared to conventional FACS-systems. As cells are less responsive to plastic or glass objects commonly used in the optical manipulation literature [2], and since laser safety would be an issue in clinical use, we develop efficient approaches in utilizing lasers and light modulation devices. The Generalized Phase Contrast (GPC) method [3-9], which can be used for efficiently illuminating spatial light modulators [10] or creating well-defined contiguous optical traps [11], is supplemented by diffractive techniques capable of integrating the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam's propagation and its interaction with the laser catapulted and sorted cells.

  18. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  19. X-ray machine vision and computed tomography

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    This survey examines how 2-D x-ray machine vision and 3-D computed tomography will be used in industry in the 1988-1995 timeframe. Specific applications are described and rank-ordered in importance. The types of companies selling and using 2-D and 3-D systems are profiled, and markets are forecast for 1988 to 1995. It is known that many machine vision and automation companies are now considering entering this field. This report looks at the potential pitfalls and whether recent market problems similar to those recently experienced by the machine vision industry will likely occur in this field. FTS will publish approximately 100 other surveys in 1988 on emerging technology in the fields of AI, manufacturing, computers, sensors, photonics, energy, bioengineering, and materials

  20. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Machine learning is now an essential and ubiquitous tool for automatically extracting patterns or regularities from data (images in machine vision; camera, laser and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classifying image content, navigating a mobile autonomous robot in uncertain environments, and robot manipulation in medical robotics and computer-assisted surgery. Such data usually have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, in accordance with which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in the paper.
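    A minimal, generic example of the manifold model described above (not code from the talk) is recovering the two-dimensional parametrisation of the classic "swiss roll" data set with scikit-learn's Isomap:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Data that lives in a 3-D observation space but actually lies on a 2-D
# manifold (the classic "swiss roll").
X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Manifold learning recovers a 2-D parametrisation of the roll.
embedding = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
print(X.shape, "->", embedding.shape)   # (1500, 3) -> (1500, 2)
```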

  1. Study on excitation and fluorescence spectrums of Japanese citruses to construct machine vision systems for acquiring fluorescent images

    Science.gov (United States)

    Momin, Md. Abdul; Kondo, Naoshi; Kuramoto, Makoto; Ogawa, Yuichi; Shigi, Tomoo

    2011-06-01

    Research was conducted to acquire knowledge of the ultraviolet and visible spectra from 300-800 nm of some common varieties of Japanese citrus, to investigate the best wavelengths for fluorescence excitation and the resulting fluorescence wavelengths, and to provide a scientific background for the best-quality fluorescent imaging technique for detecting surface defects of citrus. A Hitachi U-4000 PC-based, microprocessor-controlled spectrophotometer was used to measure the absorption spectrum and a Hitachi F-4500 spectrophotometer was used for the fluorescence and excitation spectra. We analyzed the spectra, and the selected varieties of citrus were categorized into four groups of known fluorescence level, namely strong, medium, weak and no fluorescence. The level of fluorescence of each variety was also examined by using a machine vision system. We found that LEDs or UV lamps around 340-380 nm are appropriate as lighting devices for acquiring the best-quality fluorescent image of the citrus varieties to examine their fluorescence intensity. Therefore, an image acquisition device was constructed with three different lighting panels: UV LEDs with a peak at 365 nm, blacklight blue (BLB) lamps with a peak at 350 nm, and UV-B lamps with a peak at 306 nm. The results from fluorescent images also revealed that the findings from the measured spectra worked properly and can be used for practical applications such as detecting rotten, injured or damaged parts of a wide variety of citrus.

  2. Machine Learning Techniques in Clinical Vision Sciences.

    Science.gov (United States)

    Caixinha, Miguel; Nunes, Sandrina

    2017-01-01

    This review presents and discusses the contribution of machine learning techniques for diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patients' management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve the sensitivity and specificity of disease detection and monitoring, adding objectivity to the clinical decision-making process. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches will be presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow creating homogeneous groups (unsupervised learning), or creating a classifier predicting group membership of new cases (supervised learning), when a group label is available for each case. To ensure a good performance of the machine learning techniques in a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, the noise should be removed, the missing data should be treated and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques in ocular disease diagnosis and monitoring will be presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples will be presented in glaucoma, age-related macular degeneration

  3. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will also be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
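    One of the colorimetric principles the paper alludes to is working in a device-independent colour space rather than in raw camera RGB. The sketch below applies the standard sRGB-to-CIE-L*a*b* transform (D65 white point); it is a textbook conversion, not the specific electro-optical design discussed in the paper, and it ignores camera characterisation.

```python
import numpy as np

# sRGB (D65) -> CIE XYZ matrix and D65 white point, standard colorimetric values.
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.9505, 1.0000, 1.0890])

def srgb_to_lab(rgb):
    """Convert one sRGB pixel (0..1 per channel) to CIE L*a*b* (D65 white)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma (linearise the camera values).
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M_SRGB_TO_XYZ @ lin / WHITE_D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

print(srgb_to_lab([0.8, 0.2, 0.2]))   # a saturated red gives a large positive a*
```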

  4. Deep learning: Using machine learning to study biological vision

    OpenAIRE

    Majaj, Najib; Pelli, Denis

    2017-01-01

    Today most vision-science presentations mention machine learning. Many neuroscientists use machine learning to decode neural responses. Many perception scientists try to understand recognition by living organisms. To them, machine learning offers a reference of attainable performance based on learned stimuli. This brief overview of the use of machine learning in biological vision touches on its strengths, weaknesses, milestones, controversies, and current directions.

  5. Machine Vision System for Color Sorting Wood Edge-Glued Panel Parts

    Science.gov (United States)

    Qiang Lu; S. Srikanteswara; W. King; T. Drayer; Richard Conners; D. Earl Kline; Philip A. Araman

    1997-01-01

    This paper describes an automatic color sorting system for hardwood edge-glued panel parts. The color sorting system simultaneously examines both faces of a panel part and then determines which face has the "better" color given specified color uniformity and priority defined by management. The real-time color sorting system software and hardware are briefly...

  6. The use of open and machine vision technologies for development of gesture recognition intelligent systems

    Science.gov (United States)

    Cherkasov, Kirill V.; Gavrilova, Irina V.; Chernova, Elena V.; Dokolin, Andrey S.

    2018-05-01

    The article reflects on selected aspects of the development of an intelligent gesture recognition system. The peculiarity of the system is its intellectual block, which is completely based on open technologies: the OpenCV library and the Microsoft Cognitive Toolkit (CNTK) platform. The article presents the rationale for the choice of such a set of tools, as well as the functional scheme of the system and the hierarchy of its modules. Experiments have shown that the system correctly recognizes about 85% of images received from sensors. The authors assume that improvement of the algorithmic block of the system will increase the accuracy of gesture recognition up to 95%.

  7. Boosting Economic Growth Through Advanced Machine Vision

    OpenAIRE

    MAAD, Soha; GARBAYA, Samir; AYADI, Nizar; BOUAKAZ, Saida

    2012-01-01

    In this chapter, we overview the potential of machine vision and related technologies in various application domains of critical importance for economic growth and prospect. Considered domains include healthcare, energy and environment, finance, and industrial innovation. Visibility technologies considered encompass augmented and virtual reality, 3D technologies, and media content authoring tools and technologies. We overview the main challenges facing the application domains and discuss the ...

  8. Software organization for a prolog-based prototyping system for machine vision

    Science.gov (United States)

    Jones, Andrew C.; Hack, Ralf; Batchelor, Bruce G.

    1996-11-01

    We describe PIP (Prolog image processing), a prototype system for interactive image processing using Prolog, implemented on an Apple Macintosh computer. PIP is the latest in a series of products that the third author has been involved in implementing, under the collective title Prolog+. PIP differs from our previous systems in two particularly important respects. The first is that whereas we previously required dedicated image processing hardware, the present system implements image processing routines in software. The second difference is that our present system is hierarchical in structure, where the top level of the hierarchy emulates Prolog+, but there is a flexible infrastructure which supports more sophisticated image manipulation which we will be able to exploit in due course. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image processing functions, and the interface between these functions and the Prolog system. We also explain how the existing set of Prolog+ commands has been implemented. PIP is now nearing maturity, and we will make a version of it generally available in the near future. However, although the present version of PIP constitutes a complete image processing tool, there are a number of ways in which we intend to enhance future versions, with a view to added flexibility and efficiency: we discuss these ideas briefly near the end of the present paper.

  9. Image formation simulation for computer-aided inspection planning of machine vision systems

    Science.gov (United States)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented along with a versatile two-robot-setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real time graphics and high quality off-line-rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is on the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.

  10. Design and construction of automatic sorting station with machine vision

    Directory of Open Access Journals (Sweden)

    Oscar D. Velasco-Delgado

    2014-01-01

    Full Text Available This article presents the design, construction and testing of an automatic product sorting system on a belt conveyor with machine vision that integrates Free and Open Source Software technology and Allen Bradley commercial equipment. Requirements are defined to determine features such as the mechanics of the manufacturing station, the machine vision product sorting application, and the automation system. The machine vision application uses the OpenCV digital image processing library; the mechanical design of the manufacturing station uses the Solid Edge CAD tool; and the design and implementation of the automation follows ISA standards together with an automation engineering project methodology integrating a PLC, an inverter, a PanelView and a DeviceNet network. Performance tests classify bottles and PVC pieces into four established types, checking both the behavior and the efficiency of the integrated system. The average machine vision processing time is 0.290 s for a PVC piece, a capacity of 206 pieces per minute; for bottles a processing time of 0.267 s was obtained, a capacity of 224 bottles per minute. The maximum mechanical throughput is 32 products per minute (1920 products/hour) with the conveyor at 22 cm/s and 40 cm between products, with an average error of 0.8%.

  11. Machine vision and mechatronics in practice

    CERN Document Server

    Brett, Peter

    2015-01-01

    The contributions for this book have been gathered over several years from conferences held in the series of Mechatronics and Machine Vision in Practice, the latest of which was held in Ankara, Turkey. The essential aspect is that they concern practical applications rather than the derivation of mere theory, though simulations and visualization are important components. The topics range from mining, with its heavy engineering, to the delicate machining of holes in the human skull or robots for surgery on human flesh. Mobile robots continue to be a hot topic, both from the need for navigation and for the task of stabilization of unmanned aerial vehicles. The swinging of a spray rig is damped, while machine vision is used for the control of heating in an asphalt-laying machine.  Manipulators are featured, both for general tasks and in the form of grasping fingers. A robot arm is proposed for adding to the mobility scooter of the elderly. Can EEG signals be a means to control a robot? Can face recognition be ac...

  12. Machine Vision System for Characterizing the Electric Field for the 225 Ra EDM Experiment

    Science.gov (United States)

    Sanchez, Andrew

    2017-09-01

    If an atom or fundamental particle possesses an electric dipole moment (EDM), that would imply time-reversal violation. At our current capability, if an EDM is detected in such a particle, that would suggest the discovery of beyond the standard model (BSM) physics. The unique structure of 225 Ra makes its atomic EDM favorable in the BSM search. An upgraded Ra-EDM apparatus will increase experimental sensitivity and the target electric field of 150 kV/cm will more than double the electric field used in previous experiments. To determine the electric field, the potential difference and electrode separation distance must be known. The optical method I have developed is a high-precision, non-invasive technique to measure electrode separation without making contact with the sensitive electrode surfaces. A digital camera utilizes a bi-telecentric lens to reduce parallax error and produce constant magnification throughout the optical system, regardless of object distance. A monochrome LED backlight enhances sharpness of the electrode profile, reducing uncertainty in edge determination and gap width. A program utilizing an edge detection algorithm allows precise, repeatable measurement of the gap width to within 1% and measurement of the relative angle of the electrodes. This work (SAM, Ra EDM) is supported by Michigan State University. This work (REU Program) is supported by U.S. National Science Foundation under Grant Number #1559866.
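    A simplified sketch of the gap measurement idea is shown below, assuming a backlit image in which only the gap between the electrode silhouettes is bright: count the bright pixels per column and convert the median run length to millimetres with the telecentric calibration factor. The real analysis also handles sub-pixel edge localisation and the relative electrode angle; the threshold and scale values here are illustrative.

```python
import numpy as np

def gap_width_mm(image, mm_per_pixel, threshold=128):
    """Estimate the electrode gap from a backlit (bright-gap) image.
    For every image column the number of bright pixels between the two dark
    electrode silhouettes is counted; the median count times the telecentric
    calibration factor gives the gap width in millimetres."""
    bright = np.asarray(image) > threshold      # True where the backlight shows through
    gap_pixels = bright.sum(axis=0)             # bright run length per column
    return float(np.median(gap_pixels)) * mm_per_pixel

# Synthetic example: two dark electrodes with a 40-pixel bright gap between them.
img = np.zeros((200, 300), dtype=np.uint8)
img[80:120, :] = 255                            # the illuminated gap
print(gap_width_mm(img, mm_per_pixel=0.05))     # -> 2.0 (mm)
```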

  13. Learning surface molecular structures via machine vision

    Science.gov (United States)

    Ziatdinov, Maxim; Maksov, Artem; Kalinin, Sergei V.

    2017-08-01

    Recent advances in high resolution scanning transmission electron and scanning probe microscopies have allowed researchers to perform measurements of materials structural parameters and functional properties in real space with a picometre precision. In many technologically relevant atomic and/or molecular systems, however, the information of interest is distributed spatially in a non-uniform manner and may have a complex multi-dimensional nature. One of the critical issues, therefore, lies in being able to accurately identify ('read out') all the individual building blocks in different atomic/molecular architectures, as well as more complex patterns that these blocks may form, on a scale of hundreds and thousands of individual atomic/molecular units. Here we employ machine vision to read and recognize complex molecular assemblies on surfaces. Specifically, we combine a Markov random field model and convolutional neural networks to classify structural and rotational states of all individual building blocks in a molecular assembly on a metallic surface visualized in high-resolution scanning tunneling microscopy measurements. We show how the obtained full decoding of the system allows us to directly construct a pair density function—a centerpiece in the analysis of the disorder-property relationship paradigm—as well as to analyze spatial correlations between multiple order parameters at the nanoscale, and to elucidate a reaction pathway involving molecular conformation changes. The method represents a significant shift in our way of analyzing atomic and/or molecular resolved microscopic images and can be applied to a variety of other microscopic measurements of structural, electronic, and magnetic orders in different condensed matter systems.

  14. A machine vision system for automated non-invasive assessment of cell viability via dark field microscopy, wavelet feature selection and classification

    Directory of Open Access Journals (Sweden)

    Friehs Karl

    2008-10-01

    Full Text Available Abstract Background Cell viability is one of the basic properties indicating the physiological state of the cell; thus, it has long been one of the major considerations in biotechnological applications. Conventional methods for extracting information about cell viability usually need reagents to be applied on the targeted cells. These reagent-based techniques are reliable and versatile; however, some of them might be invasive and even toxic to the target cells. In support of automated noninvasive assessment of cell viability, a machine vision system has been developed. Results This system is based on a supervised learning technique. It learns from images of certain kinds of cell populations and trains some classifiers. These trained classifiers are then employed to evaluate the images of given cell populations obtained via dark field microscopy. Wavelet decomposition is performed on the cell images. Energy and entropy are computed for each wavelet subimage as features. A feature selection algorithm is implemented to achieve better performance. Correlation between the results from the machine vision system and commonly accepted gold standards becomes stronger if wavelet features are utilized. The best performance is achieved with a selected subset of wavelet features. Conclusion The machine vision system based on dark field microscopy in conjunction with supervised machine learning and wavelet feature selection automates the cell viability assessment, and yields comparable results to commonly accepted methods. Wavelet features are found to be suitable to describe the discriminative properties of the live and dead cells in viability classification. According to the analysis, live cells exhibit morphologically more details and are intracellularly more organized than dead ones, which display more homogeneous and diffuse gray values throughout the cells. Feature selection increases the system's performance. The reason lies in the fact that feature
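    A sketch of the wavelet feature step, assuming the PyWavelets library: perform a 2-D wavelet decomposition of the cell image and compute energy and Shannon entropy for every subimage. The wavelet family, decomposition level and synthetic test image are assumptions; the feature selection step and the trained classifiers are omitted.

```python
import numpy as np
import pywt

def wavelet_features(cell_image, wavelet="db2", level=2):
    """Energy and Shannon entropy of every subimage of a 2-D wavelet
    decomposition, used as texture features for viability classification."""
    coeffs = pywt.wavedec2(np.asarray(cell_image, dtype=float), wavelet, level=level)
    # Flatten the coefficient structure: approximation + (H, V, D) details per level.
    subimages = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    features = []
    for sub in subimages:
        c = np.abs(sub).ravel()
        energy = float(np.sum(c ** 2))
        p = c ** 2 / energy if energy > 0 else np.ones_like(c) / c.size
        entropy = float(-np.sum(p * np.log2(p + 1e-12)))
        features.extend([energy, entropy])
    return np.array(features)

# Synthetic dark-field-like image: mostly dark background with a bright, structured blob.
rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 20
img[20:40, 20:40] += 200
print(wavelet_features(img).shape)   # two features per wavelet subimage
```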

  15. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    Science.gov (United States)

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  16. Machine Vision Implementation in Rapid PCB Prototyping

    Directory of Open Access Journals (Sweden)

    Yosafat Surya Murijanto

    2012-03-01

    Full Text Available Image processing, the heart of machine vision, has proven itself to be an essential part of industry today. Its application has opened new doorways, making more concepts in manufacturing processes viable. This paper presents an application of machine vision in designing a module with the ability to extract drill and route coordinates from an un-mounted or mounted printed circuit board (PCB). The algorithm comprises pre-capturing processes, image segmentation and filtering, edge and contour detection, coordinate extraction, and G-code creation. OpenCV libraries and the Qt IDE are the main tools used. Through testing and experiments, it is concluded that the algorithm is able to deliver acceptable results. The drilling and routing coordinate extraction algorithm can extract on average 90% of the drills and 82% of the routes available on the scanned PCB in a total processing time of less than 3 seconds. This is achievable through proper lighting conditions, good PCB surface condition and good webcam quality.
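    A condensed sketch of the drill-extraction and G-code step, assuming OpenCV: find blob contours in a binary pad image, keep blobs of plausible drill size, and emit drilling moves. The scale factor, drilling depth, feed rate and the synthetic test board are example values, not the module's actual settings.

```python
import cv2
import numpy as np

def drill_gcode(binary_pads, mm_per_pixel=0.1, drill_depth_mm=1.6):
    """Locate round pad/drill blobs in a binary image (pads = white) and emit
    simple drilling G-code moves."""
    contours, _ = cv2.findContours(binary_pads, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
    for c in contours:
        (x, y), radius = cv2.minEnclosingCircle(c)
        if 2 < radius < 20:                               # plausible drill sizes only
            lines.append(f"G0 X{x * mm_per_pixel:.3f} Y{y * mm_per_pixel:.3f}")
            lines.append(f"G1 Z-{drill_depth_mm:.2f} F60  ; drill")
            lines.append("G0 Z2.000                       ; retract")
    return "\n".join(lines)

# Synthetic board: three drill pads on a black background. In the real module
# this binary image would come from segmenting the webcam image of the PCB.
board = np.zeros((200, 300), dtype=np.uint8)
for cx, cy in [(50, 60), (150, 100), (250, 160)]:
    cv2.circle(board, (cx, cy), 6, 255, -1)
print(drill_gcode(board))
```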

  17. Machine vision automated visual inspection theory, practice and applications

    CERN Document Server

    Beyerer, Jürgen; Frese, Christian

    2016-01-01

    The book offers a thorough introduction to machine vision. It is organized in two parts. The first part covers the image acquisition, which is the crucial component of most automated visual inspection systems. All important methods are described in great detail and are presented with a reasoned structure. The second part deals with the modeling and processing of image signals and pays particular regard to methods, which are relevant for automated visual inspection.

  18. Machine-vision-based identification of broken inserts in edge profile milling heads

    NARCIS (Netherlands)

    Fernandez Robles, Laura; Azzopardi, George; Alegre, Enrique; Petkov, Nicolai

    This paper presents a reliable machine vision system to automatically detect inserts and determine if they are broken. Unlike the machining operations studied in the literature, we are dealing with edge milling head tools for aggressive machining of thick plates (up to 12 centimetres) in a single

  19. Developing a machine vision system for simultaneous prediction of freshness indicators based on tilapia (Oreochromis niloticus) pupil and gill color during storage at 4°C.

    Science.gov (United States)

    Shi, Ce; Qian, Jianping; Han, Shuai; Fan, Beilei; Yang, Xinting; Wu, Xiaoming

    2018-03-15

    The study assessed the feasibility of developing a machine vision system based on pupil and gill color changes in tilapia for simultaneous prediction of total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA) and total viable counts (TVC) during storage at 4°C. The pupils and gills were chosen and color space conversion among RGB, HSI and L*a*b* color spaces was performed automatically by an image processing algorithm. Multiple regression models were established by correlating pupil and gill color parameters with TVB-N, TVC and TBA (R² = 0.989-0.999). However, assessment of freshness based on gill color is destructive and time-consuming because the gill cover must be removed before images are captured. Finally, visualization maps of spoilage based on pupil color were achieved using image algorithms. The results show that assessment of tilapia pupil color parameters using machine vision can be used as a low-cost, on-line method for predicting freshness during 4°C storage. Copyright © 2017 Elsevier Ltd. All rights reserved.
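    To illustrate the kind of regression model reported (colour parameters against a freshness indicator), the sketch below fits a multiple linear regression of mean pupil colour values against TVB-N with scikit-learn; all numbers are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical mean pupil colour parameters (e.g. R, G, B means) measured on
# successive storage days, and the corresponding chemically determined TVB-N.
pupil_color = np.array([[182, 165, 150],
                        [170, 150, 138],
                        [158, 138, 124],
                        [145, 124, 110],
                        [130, 110,  95]], dtype=float)
tvbn = np.array([8.5, 11.2, 14.8, 19.6, 26.3])   # mg N / 100 g, synthetic values

model = LinearRegression().fit(pupil_color, tvbn)
print("R^2 on the calibration data:", model.score(pupil_color, tvbn))
print("Predicted TVB-N for a new fish:", model.predict([[160, 140, 126]]))
```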

  20. Gradual Reduction in Sodium Content in Cooked Ham, with Corresponding Change in Sensorial Properties Measured by Sensory Evaluation and a Multimodal Machine Vision System.

    Directory of Open Access Journals (Sweden)

    Kirsti Greiff

    Full Text Available The European diet today generally contains too much sodium (Na+). A partial substitution of NaCl by KCl has shown to be a promising method for reducing sodium content. The aim of this work was to investigate the sensorial changes of cooked ham with reduced sodium content. Traditional sensorial evaluation and objective multimodal machine vision were used. The salt content in the hams was decreased from 3.4% to 1.4%, and 25% of the Na+ was replaced by K+. The salt reduction had the highest influence on the sensory attributes salty taste, after taste, tenderness, hardness and color hue. The multimodal machine vision system showed changes in lightness as a function of reduced salt content. Compared to the reference ham (3.4% salt), a replacement of 25% of the Na+ ions by K+ ions gave no significant changes in WHC, moisture, pH, expressed moisture, the sensory profile attributes or the surface lightness and shininess. A further reduction of salt down to 1.7-1.4% salt led to a decrease in WHC and an increase in expressible moisture.

  1. Gradual Reduction in Sodium Content in Cooked Ham, with Corresponding Change in Sensorial Properties Measured by Sensory Evaluation and a Multimodal Machine Vision System.

    Science.gov (United States)

    Greiff, Kirsti; Mathiassen, John Reidar; Misimi, Ekrem; Hersleth, Margrethe; Aursand, Ida G

    2015-01-01

    The European diet today generally contains too much sodium (Na(+)). A partial substitution of NaCl by KCl has shown to be a promising method for reducing sodium content. The aim of this work was to investigate the sensorial changes of cooked ham with reduced sodium content. Traditional sensorial evaluation and objective multimodal machine vision were used. The salt content in the hams was decreased from 3.4% to 1.4%, and 25% of the Na(+) was replaced by K(+). The salt reduction had highest influence on the sensory attributes salty taste, after taste, tenderness, hardness and color hue. The multimodal machine vision system showed changes in lightness, as a function of reduced salt content. Compared to the reference ham (3.4% salt), a replacement of Na(+)-ions by K(+)-ions of 25% gave no significant changes in WHC, moisture, pH, expressed moisture, the sensory profile attributes or the surface lightness and shininess. A further reduction of salt down to 1.7-1.4% salt, led to a decrease in WHC and an increase in expressible moisture.

  2. Using a vision cognitive algorithm to schedule virtual machines

    Directory of Open Access Journals (Sweden)

    Zhao Jiaqi

    2014-09-01

    Full Text Available Scheduling virtual machines is a major research topic for cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the cloud. Our primary concern with VM scheduling is the energy consumption, because the largest part of a cloud center's operation cost goes to the kilowatts used. We designed a scheduling algorithm that allocates an incoming virtual machine instance to the host machine which results in the lowest energy consumption of the entire system. More specifically, we developed a new algorithm, called vision cognition, to solve the global optimization problem. This algorithm is inspired by the observation of how human eyes directly see the smallest/largest item without comparing items pairwise. We theoretically proved that the algorithm works correctly and converges fast. Practically, we validated the novel algorithm, together with the scheduling concept, using a simulation approach. The adopted cloud simulator models different cloud infrastructures with various properties and detailed runtime information that usually cannot be acquired from real clouds. The experimental results demonstrate the benefit of our approach in terms of reducing the cloud center's energy consumption.
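    The vision cognition algorithm itself is not described in the abstract, so the sketch below only shows the baseline objective it is designed to optimise: among feasible hosts, pick the one whose predicted energy increase is lowest. The host dictionaries and the toy power model are assumptions, and the exhaustive comparison shown here is exactly what the paper's algorithm is meant to avoid.

```python
def place_vm(hosts, vm_demand, energy_increase):
    """Return the index of the host with the lowest predicted energy increase
    among hosts that can still fit the VM. `energy_increase(host, vm)` is an
    assumed power model supplied by the caller."""
    feasible = [i for i, h in enumerate(hosts)
                if h["free_cpu"] >= vm_demand["cpu"] and h["free_ram"] >= vm_demand["ram"]]
    if not feasible:
        raise RuntimeError("no host can accommodate the VM")
    return min(feasible, key=lambda i: energy_increase(hosts[i], vm_demand))

# Toy power model: hosts running closer to full load are more efficient per
# added core, so consolidation is favoured.
def toy_energy_increase(host, vm):
    return 50.0 * vm["cpu"] * (1.0 + host["free_cpu"] / host["total_cpu"])

hosts = [{"free_cpu": 8, "total_cpu": 32, "free_ram": 64},
         {"free_cpu": 24, "total_cpu": 32, "free_ram": 128}]
print(place_vm(hosts, {"cpu": 4, "ram": 16}, toy_energy_increase))   # -> 0
```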

  3. INFIBRA: machine vision inspection of acrylic fiber production

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  4. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  5. Low Vision Enhancement System

    Science.gov (United States)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate NASA software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  6. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard. Vision 2000 is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, to monitor and to analyze the processes of the structure, b) to measure the results of the processes so as to ensure that they are effective, d) to implement actions necessary to achieve the planned results and the continual improvement of these processes, e) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiological departments.

  7. Simulation of the «COSMONAUT-ROBOT» System Interaction on the Lunar Surface Based on Methods of Machine Vision and Computer Graphics

    Science.gov (United States)

    Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.

    2017-05-01

    Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is a proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and cosmonaut's poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools for tracking the cosmonaut's position and movements, and on the basis of these data builds its itinerary. The interaction in the "cosmonaut-robot" system on the lunar surface is significantly different from that on the Earth's surface. For example, a man dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human being performs movements less accurately and often makes mistakes. All this leads to new requirements for the convenient use of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication it is necessary to provide options for duplicating commands at the task stages and for gesture recognition. New tools and techniques for space missions must be examined at the first stage of work in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes the methods of detection and tracking of movements and gesture recognition of the cosmonaut during EVA, which can be used for the design of a human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. Simulation involves environment visualization and modeling of the use of the "vision" of the robot to track a moving cosmonaut dressed in a spacesuit.

  8. Integration of USB and firewire cameras in machine vision applications

    Science.gov (United States)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer market cameras is hitting the main stream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating system, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards, and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  9. Trends and developments in industrial machine vision: 2013

    Science.gov (United States)

    Niel, Kurt; Heinzl, Christoph

    2014-03-01

    When following current advancements and implementations in the field of machine vision, there seem to be no borders for future developments: calculating power constantly increases, new ideas are spreading, and previously challenging approaches are introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples: face recognition was adopted by the consumer market, 3D capturing became cheap, and, due to the huge community, software coding got easier using sophisticated development platforms. However, there is still a remaining gap between consumer and industrial applications. While the former have to be entertaining, the latter have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges for industrial machine vision are simple usage and reliability for the process, quick support, full automation, self/easy adjustment to changing process parameters, "forget it in the line". A further big challenge is supporting quality control: nowadays the operator has to accurately define the tested features for checking the probes. There is also an upcoming development to let automated machine vision applications find out essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation, quality control (inline/atline/offline), and visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented towards object-oriented evaluation is addressed. We do not directly address the field of robotics taking advantage of machine vision. This is actually a fast-changing area which is worth an own

  10. Machine learning and computer vision approaches for phenotypic profiling.

    Science.gov (United States)

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  11. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  12. Machine protection systems

    CERN Document Server

    Macpherson, A L

    2010-01-01

    A summary of the Machine Protection System of the LHC is given, with particular attention given to the outstanding issues to be addressed, rather than the successes of the machine protection system from the 2009 run. In particular, the issues of Safe Machine Parameter system, collimation and beam cleaning, the beam dump system and abort gap cleaning, injection and dump protection, and the overall machine protection program for the upcoming run are summarised.

  13. Machine vision for a selective broccoli harvesting robot

    NARCIS (Netherlands)

    Blok, Pieter M.; Barth, Ruud; Berg, Van Den Wim

    2016-01-01

    The selective hand-harvest of fresh market broccoli is labor-intensive and comprises about 35% of the total production costs. This research was conducted to determine whether machine vision can be used to detect broccoli heads, as a first step in the development of a fully autonomous selective

  14. Machine Vision Technology for the Forest Products Industry

    Science.gov (United States)

    Richard W. Conners; D.Earl Kline; Philip A. Araman; Thomas T. Drayer

    1997-01-01

    From forest to finished product, wood is moved from one processing stage to the next, subject to the decisions of individuals along the way. While this process has worked for hundreds of years, the technology exists today to provide more complete information to the decision makers. Virginia Tech has developed this technology, creating a machine vision prototype for...

  15. Ethical, environmental and social issues for machine vision in manufacturing industry

    Science.gov (United States)

    Batchelor, Bruce G.; Whelan, Paul F.

    1995-10-01

    Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues, and raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. However, in a short article like this, it is impossible to cover the subject comprehensively. This paper should therefore be seen as a discussion document, which it is hoped will provoke more detailed consideration of these very important issues. It follows on from an article presented at last year's workshop. Five major topics are discussed: (1) The impact of machine vision systems on the environment; (2) The implications of machine vision for product and factory safety, and the health and well-being of employees; (3) The importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) Commercial and managerial integrity; and (5) The impact of machine vision technology on employment prospects, particularly for people with low skill levels.

  16. Machine vision applications for physical security, quality assurance and personnel dosimetry

    International Nuclear Information System (INIS)

    Kar, S.; Shrikhande, S.V.; Suresh Babu, R.M.

    2016-01-01

    Machine vision is the technology used to provide imaging-based solutions to variety of applications, relevant to nuclear facilities and other industries. It uses computerized image analysis for automatic inspection, process control, object sorting, parts assembly, human identity authentication, and so on. In this article we discuss the in-house developed machine vision systems at EISD, BARC for three specific areas: Biometric recognition for physical security, visual inspection for QA of fuel pellets, and fast neutron personnel dosimetry. The advantages in using these systems include objective decision making, reduced man-rem, operational consistency, and capability of statistical quantitative analysis. (author)

  17. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  18. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  19. A two-level real-time vision machine combining coarse and fine grained parallelism

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Pauwels, Karl

    2010-01-01

    In this paper, we describe a real-time vision machine having a stereo camera as input, generating visual information on two different levels of abstraction. The system provides visual low-level and mid-level information in terms of dense stereo and optical flow, egomotion, indicating areas... a factor 90 and a reduction of latency of a factor 26 compared to processing on a single CPU core. Since the vision machine provides generic visual information it can be used in many contexts. Currently it is used in a driver assistance context as well as in two robotic applications...

  20. Machine vision based quality inspection of flat glass products

    Science.gov (United States)

    Zauner, G.; Schagerl, M.

    2014-03-01

    This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (standard deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. To this end, the following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi-class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
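
    The classifier comparison described above can be illustrated with a short scikit-learn sketch. This is not the authors' code (the algorithm names suggest a Weka-based study, and JRip has no direct scikit-learn equivalent, so it is omitted); the feature matrix below is a synthetic stand-in for the histogram, geometric and texture features listed.

        # Hedged sketch: cross-validated comparison of classifier families similar
        # to those named in the abstract, on synthetic placeholder features.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2300, 12))      # placeholder for extracted defect features
        y = rng.integers(0, 5, size=2300)    # five defect classes, as in the paper

        classifiers = {
            "decision tree": DecisionTreeClassifier(),
            "random forest": RandomForestClassifier(n_estimators=100),
            "naive Bayes": GaussianNB(),
            "SVM (multi-class)": SVC(),
            "MLP": MLPClassifier(max_iter=500),
            "k-NN": KNeighborsClassifier(),
        }
        for name, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
            print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")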

  1. Software architecture for time-constrained machine vision applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
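
    The topic-based publish/subscribe routing described above can be sketched in a few lines of Python. This is only an illustration of the pattern, not the paper's actual middleware; the class name and topic strings are invented for the example.

        # Minimal sketch of topic-based publish/subscribe message routing.
        from collections import defaultdict
        from typing import Any, Callable

        class MessageBus:
            """Routes messages from publishers to subscribers by topic name."""
            def __init__(self) -> None:
                self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

            def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
                self._subscribers[topic].append(handler)

            def publish(self, topic: str, message: Any) -> None:
                for handler in self._subscribers[topic]:
                    handler(message)

        # An acquisition module publishes frames; processing and alarm modules
        # subscribe only to the topics they are interested in.
        bus = MessageBus()
        bus.subscribe("frames/raw", lambda frame: print("processing", frame))
        bus.subscribe("alarms/jam", lambda msg: print("jam detected:", msg))
        bus.publish("frames/raw", "frame_0001")
        bus.publish("alarms/jam", {"line": 3, "severity": "high"})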

  2. A method of size inspection for fruit with machine vision

    Science.gov (United States)

    Rao, Xiuqin; Ying, Yibin

    2005-11-01

    A real-time machine vision system for fruit quality inspection was developed, consisting of rollers, an encoder, a lighting chamber, a TMS-7DSP CCD camera (PULNIX Inc.), a computer (P4 1.8G, 128M) and a set of grading controllers. Each image was binarized and the edge was detected with a line-scan-based digital image description; the minimum enclosing rectangle (MER) was first applied to measure the size of the fruit, but failed. The reason was that the test points used by the MER differed from those used when measuring with a vernier caliper. An improved method, called the software vernier caliper, was therefore developed. A line is drawn between the centroid O of the fruit and a point A on the edge, and the intersection of line OA with the opposite side of the edge is calculated and denoted B. A point C on AB is selected, and a point D on the other side of the edge is sought such that CD is perpendicular to AB; by moving C between A and B, the maximum length of CD is recorded as an extremum value. By moving A from the start to the halfway point of the edge, a series of CD values is obtained. Eighty navel oranges were tested, and the maximum diameter error was less than 1 mm.
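
    A rough numerical sketch of this kind of caliper measurement is given below. It is not the authors' implementation: for a (nearly) convex fruit outline, the maximum chord perpendicular to a given axis equals the extent of the projection onto the perpendicular direction, so the sketch simply scans candidate axis directions; the binary mask and sampling density are assumptions.

        # Hedged sketch: caliper-style diameter estimate from a binary fruit mask.
        import numpy as np

        def caliper_diameter(mask: np.ndarray, n_angles: int = 180) -> float:
            """Scan axis directions and return the widest perpendicular extent."""
            ys, xs = np.nonzero(mask)
            pts = np.column_stack([xs, ys]).astype(np.float64)
            pts -= pts.mean(axis=0)                  # centre points on the centroid O
            widest = 0.0
            for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
                perp = np.array([-np.sin(theta), np.cos(theta)])   # direction of chord CD
                extent = pts @ perp
                widest = max(widest, extent.max() - extent.min())
            return widest

        # Toy example: an ellipse-shaped 'fruit' with semi-axes 30 and 20 pixels.
        yy, xx = np.indices((100, 100))
        fruit = ((xx - 50) / 30.0) ** 2 + ((yy - 50) / 20.0) ** 2 <= 1.0
        print(caliper_diameter(fruit))   # close to 60 px, the major-axis diameter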

  3. Machine vision inspection of lace using a neural network

    Science.gov (United States)

    Sanby, Christopher; Norton-Wayne, Leonard

    1995-03-01

    Lace is particularly difficult to inspect using machine vision since it comprises a fine and complex pattern of threads which must be verified, on-line and in real time. Small distortions in the pattern are unavoidable. This paper describes instrumentation for inspecting lace actually on the knitting machine. A CCD linescan camera synchronized to machine motions grabs an image of the lace. Differences between this lace image and a perfect prototype image are detected by comparison methods, thresholding techniques, and finally, a neural network (to distinguish real defects from false alarms). Though produced originally in a laboratory on SUN Sparc workstations, the processing has subsequently been implemented on a 50 MHz 486 PC-look-alike. Successful operation has been demonstrated in a factory, but over a restricted width. Full-width coverage awaits provision of faster processing.

  4. Fire protection for launch facilities using machine vision fire detection

    Science.gov (United States)

    Schwartz, Douglas B.

    1993-02-01

    Fire protection of critical space assets, including launch and fueling facilities and manned flight hardware, demands automatic sensors for continuous monitoring and, in certain high-threat areas, fast-reacting automatic suppression systems. Perhaps the most essential characteristic for these fire detection and suppression systems is high reliability; in other words, fire detectors should alarm only on actual fires and not be falsely activated by extraneous sources. Existing types of fire detectors have been greatly improved in the past decade; however, fundamental limitations of their method of operation leave open a significant possibility of false alarms and restrict their usefulness. At the Civil Engineering Laboratory at Tyndall Air Force Base in Florida, a new type of fire detector is under development which 'sees' a fire visually, like a human being, and makes a reliable decision based on known visual characteristics of flames. Hardware prototypes of the Machine Vision (MV) Fire Detection System have undergone live fire tests and demonstrated extremely high accuracy in discriminating actual fires from false alarm sources. In fact, this technology promises to virtually eliminate false activations. This detector could be used to monitor fueling facilities, launch towers, clean rooms, and other high-value and high-risk areas. Applications can extend to space station and in-flight shuttle operations as well; fiber optics and remote camera heads enable the system to see around obstructed areas and crew compartments. The capability of the technology to distinguish fires means that fire detection can be provided even during maintenance operations, such as welding.

  5. Practical guide to machine vision software an introduction with LabVIEW

    CERN Document Server

    Kwon, Kye-Si

    2014-01-01

    For both students and engineers in R&D, this book explains machine vision in a concise, hands-on way, using the Vision Development Module of the LabVIEW software by National Instruments. Following a short introduction to the basics of machine vision and the technical procedures of image acquisition, the book goes on to guide readers in the use of the various software functions of LabVIEW's machine vision module. It covers typical machine vision tasks, including particle analysis, edge detection, pattern and shape matching, dimension measurements as well as optical character recognition, enabli

  6. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  7. Vision Systems with the Human in the Loop

    Science.gov (United States)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  8. Vision Systems with the Human in the Loop

    Directory of Open Access Journals (Sweden)

    Bauckhage Christian

    2005-01-01

    Full Text Available The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems is raised. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

  9. Bionic machines and systems

    Energy Technology Data Exchange (ETDEWEB)

    Halme, A.; Paanajaervi, J. (eds.)

    2004-07-01

    Introduction: Biological systems form a versatile and complex entirety on our planet. One evolutionary branch of primates, called humans, has created an extraordinary skill, called technology, by the aid of which it nowadays dominates life on the planet. Humans use technology for producing and harvesting food, for healthcare and reproduction, for increasing their capability to commute and communicate, for defending their territory, and for developing more technology. As a result, humans have become highly technology-dependent, so that they have been forced to form a specialized class of humans, called engineers, who take care of the knowledge of technology, developing it further and transferring it to later generations. Until now, technology has been relatively independent of biology, although some of its branches, e.g. biotechnology and biomedical engineering, have traditionally been in close contact with it. There exists, however, an increasing interest in expanding the interface between technology and biology, either by directly utilizing biological processes or materials and combining them with 'dead' technology, or by mimicking in technological solutions the biological innovations created by evolution. The latter theme is the focus of this report, which has been written as the proceedings of the post-graduate seminar 'Bionic Machines and Systems' held at the HUT Automation Technology Laboratory in autumn 2003. The underlying idea of the seminar was to analyze biological species by considering them as 'robotic machines' having various functional subsystems, such as those for energy, motion and motion control, perception, navigation, mapping and localization. We were also interested in intelligent capabilities, such as learning and communication, and in social structures like swarming behavior and its mechanisms. The word 'bionic machine' comes from the book which was among the initial material when starting our mission to the fascinating world

  10. Intelligent Machine Vision for Automated Fence Intruder Detection Using Self-organizing Map

    OpenAIRE

    Veldin A. Talorete Jr.; Sherwin A Guirnaldo

    2017-01-01

    This paper presents an intelligent machine vision system for automated fence intruder detection. A series of still images containing fence events, captured using Internet Protocol cameras, was used as input data to the system. Two classifiers were used: the first classifies human posture and the second classifies intruder location. The system classifiers were implemented using Self-Organizing Maps after several image segmentation steps. The human posture classifie...
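
    A minimal sketch of Self-Organizing-Map-based classification is given below, using the third-party minisom package. It is only an illustration of the general approach; the paper's actual features, map size and training scheme are not given in this record, so the values here are placeholders.

        # Hedged sketch: SOM trained on placeholder posture features, with each map
        # node labelled by the majority class of the training samples it wins.
        import numpy as np
        from minisom import MiniSom

        rng = np.random.default_rng(1)
        features = rng.random((200, 16))          # stand-ins for posture descriptors
        labels = rng.integers(0, 2, size=200)     # 0 = no intruder, 1 = intruder

        som = MiniSom(6, 6, 16, sigma=1.0, learning_rate=0.5, random_seed=1)
        som.train_random(features, 1000)

        node_labels = {}
        for vec, lab in zip(features, labels):
            node_labels.setdefault(som.winner(vec), []).append(lab)
        node_labels = {n: max(set(v), key=v.count) for n, v in node_labels.items()}

        new_obs = rng.random(16)
        print("predicted class:", node_labels.get(som.winner(new_obs), "unknown"))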

  11. Prototyping machine vision software on the World Wide Web

    Science.gov (United States)

    Karantalis, George; Batchelor, Bruce G.

    1998-10-01

    Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised and these run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software, operating over the Internet, or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision systems designers.

  12. Toward The Robot Eye: Isomorphic Representation For Machine Vision

    Science.gov (United States)

    Schenker, Paul S.

    1981-10-01

    This paper surveys some issues confronting the conception of models for general purpose vision systems. We draw parallels to requirements of human performance under visual transformations naturally occurring in the ecological environment. We argue that successful real world vision systems require a strong component of analogical reasoning. We propose a course of investigation into appropriate models, and illustrate some of these proposals by a simple example. Our study emphasizes the potential importance of isomorphic representations - models of image and scene which embed a metric of their respective spaces, and whose topological structure facilitates identification of scene descriptors that are invariant under viewing transformations.

  13. Fully automatic CNC machining production system

    Directory of Open Access Journals (Sweden)

    Lee Jeng-Dao

    2017-01-01

    Full Text Available Customized manufacturing is increasing year by year. Changing consumption habits have shortened product life cycles. Therefore, many countries view Industry 4.0 as a target to achieve more efficient and more flexible automated production. Developing an automatic loading and unloading CNC machining system with vision inspection is the first step in this industrial upgrading. In this study, a CNC controller is adopted as the main controller to command the robot, conveyor, and other equipment. Moreover, machine vision systems are used to detect the position of material on the conveyor and the edge of the machining material. In addition, Open CNC and SCADA software are utilized to provide real-time monitoring, remote control, alarm e-mail notification, and parameter collection. Furthermore, RFID has been added for employee classification and management. Machine handshaking has been successfully implemented to achieve automatic vision detection, edge-tracing measurement, machining, and collection of system parameters for data analysis, accomplishing industrial automation system integration with real-time monitoring.

  14. Machine learning systems

    Energy Technology Data Exchange (ETDEWEB)

    Forsyth, R

    1984-05-01

    With the dramatic rise of expert systems has come a renewed interest in the fuel that drives them: knowledge. For it is specialist knowledge which gives expert systems their power. But extracting knowledge from human experts in symbolic form has proved arduous and labour-intensive, so the idea of machine learning is enjoying a renaissance. Machine learning is any automatic improvement in the performance of a computer system over time, as a result of experience. Thus a learning algorithm seeks to do one or more of the following: cover a wider range of problems, deliver more accurate solutions, obtain answers more cheaply, and simplify codified knowledge. 6 references.

  15. Computer vision and machine learning with RGB-D sensors

    CERN Document Server

    Shao, Ling; Kohli, Pushmeet

    2014-01-01

    This book presents an interdisciplinary selection of cutting-edge research on RGB-D based computer vision. Features: discusses the calibration of color and depth cameras, the reduction of noise on depth maps and methods for capturing human performance in 3D; reviews a selection of applications which use RGB-D information to reconstruct human figures, evaluate energy consumption and obtain accurate action classification; presents an approach for 3D object retrieval and for the reconstruction of gas flow from multiple Kinect cameras; describes an RGB-D computer vision system designed to assist t

  16. Automatic Quality Inspection of Percussion Cap Mass Production by Means of 3D Machine Vision and Machine Learning Techniques

    Science.gov (United States)

    Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.

    Exhaustive quality control is becoming very important in the world's globalized market. One example where quality control becomes critical is percussion cap mass production. These elements must achieve a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. The task presents multiple problems, such as metallic reflections on the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Due to these problems it is impossible to solve the task with traditional image processing methods alone, and hence machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.

  17. Smart Machine Protection System

    International Nuclear Information System (INIS)

    Clark, S.; Nelson, D.; Grillo, A.; Spencer, N.; Hutchinson, D.; Olsen, J.; Millsom, D.; White, G.; Gromme, T.; Allison, S.; Underwood, K.; Zelazny, M.; Kang, H.

    1991-11-01

    A Machine Protection System implemented on the SLC automatically controls the beam repetition rates in the accelerator so that radiation or temperature faults slow the repetition rate to bring the fault within tolerance without shutting down the machine. This process allows the accelerator to aid in the fault diagnostic process, and the protection system automatically restores the beams back to normal rates when the fault is diagnosed and corrected. The user interface includes facilities to monitor the performance of the system, and track rate limits, faults, and recoveries. There is an edit facility to define the devices to be included in the protection system, along with their set points, limits, and trip points. This set point and limit data is downloaded into the CAMAC modules, and the configuration data is compiled into a logical decision tree for the 68030 processor. 3 figs

  18. Smart machine protection system

    International Nuclear Information System (INIS)

    Clark, S.; Nelson, D.; Grillo, A.

    1992-01-01

    A Machine Protection System implemented on the SLC automatically controls the beam repetition rates in the accelerator so that radiation or temperature faults slow the repetition rate to bring the fault within tolerance without shutting down the machine. This process allows the accelerator to aid in the fault diagnostic process, and the protection system automatically restores the beams back to normal rates when the fault is diagnosed and corrected. The user interface includes facilities to monitor the performance of the system, and track rate limits, faults, and recoveries. There is an edit facility to define the devices to be included in the protection system, along with their set points, limits, and trip points. This set point and limit data is downloaded into the CAMAC modules, and the configuration data is compiled into a logical decision tree for the 68030 processor. (author)

  19. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    Science.gov (United States)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
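
    The kind of image-representation pipeline described above can be sketched with a generic bag-of-visual-words approach: local features are detected and described, clustered into a codebook, and each micrograph is represented as a histogram of visual words before classification. This is a stand-in, not the authors' pipeline (their choice of feature detector, encoding and classifier is not given in this record); image loading and labels are assumed to be supplied by the caller.

        # Hedged sketch: local features -> visual-word codebook -> per-image
        # histogram -> supervised classification of powder micrographs.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def bovw_histograms(descs_per_image, codebook):
            """Represent each micrograph as a normalized histogram of visual words."""
            hists = []
            for desc in descs_per_image:
                words = codebook.predict(desc.astype(np.float32))
                hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
                hists.append(hist / max(hist.sum(), 1))
            return np.array(hists)

        def train_powder_classifier(train_images, train_labels, n_words=64):
            """train_images: grayscale micrographs; train_labels: material system IDs."""
            orb = cv2.ORB_create()
            pairs = [(orb.detectAndCompute(img, None)[1], lab)
                     for img, lab in zip(train_images, train_labels)]
            pairs = [(d, lab) for d, lab in pairs if d is not None]   # drop featureless images
            descs, labels = zip(*pairs)
            codebook = KMeans(n_clusters=n_words, random_state=0).fit(
                np.vstack(descs).astype(np.float32))
            classifier = SVC().fit(bovw_histograms(descs, codebook), labels)
            return codebook, classifier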

  20. Development of the Triple Theta assembly station with machine vision feedback

    International Nuclear Information System (INIS)

    Schmidt, Derek William

    2008-01-01

    Increased requirements for tighter tolerances on assembled target components in complex three-dimensional geometries with only days to assemble complete campaigns require the implementation of a computer-controlled high-precision assembly station. Over the last year, an 11-axis computer-controlled assembly station has been designed and built with custom software to handle the multiple coordinate systems and automatically calculate all relational positions. Preliminary development efforts have also been done to explore the benefit of a machine vision feedback module with a dual-camera viewing system to automate certain basic features like crosshair calibration, component leveling, and component centering.

  1. Considerations for implementing machine vision for detecting watercore in apples

    Science.gov (United States)

    Upchurch, Bruce L.; Throop, James A.

    1993-05-01

    Watercore in apples is a physiological disorder that affects the internal quality of the fruit. Growers can experience serious economic losses due to internal breakdown of the apple if watercored apples are placed unknowingly into long term storage. Economic losses can also occur if watercore is detected and the entire `lot' is downgraded; however, a gain can be obtained if watercored fruit is segregated and marketed as a premium apple soon after harvest. Watercore is characterized by the accumulation of fluid around the vascular bundles replacing air spaces between cells. This fluid reduces the light scattering properties of the apple. Using machine vision to measure the amount of light transmitted through the apple, watercored apples were segregated according to the severity of damage. However, the success of the method was dependent upon two factors. First, the sensitivity of the camera dictated the classes of watercore that could be detected. A highly sensitive camera could separate the less severe classes at the expense of not distinguishing between the more severe classes. A second factor which is common to most quality attributes in perishable commodities is the elapsed time after harvest at which the measurement was made. At the end of the study, light transmission levels decreased to undetectable levels with the initial camera settings for all watercore classes.

  2. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    A.I. Memo No. 1037, April 1988: 'Dynamical Systems and Motion Vision,' Joachim Heel, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139. [Only the report-form metadata is recoverable from this record; the abstract text is garbled.]

  3. Tensor Voting A Perceptual Organization Approach to Computer Vision and Machine Learning

    CERN Document Server

    Mordohai, Philippos

    2006-01-01

    This lecture presents research on a general framework for perceptual organization that was conducted mainly at the Institute for Robotics and Intelligent Systems of the University of Southern California. It is not written as a historical recount of the work, since the sequence of the presentation is not in chronological order. It aims at presenting an approach to a wide range of problems in computer vision and machine learning that is data-driven, local and requires a minimal number of assumptions. The tensor voting framework combines these properties and provides a unified perceptual organiza

  4. Current Technologies and its Trends of Machine Vision in the Field of Security and Disaster Prevention

    Science.gov (United States)

    Hashimoto, Manabu; Fujino, Yozo

    Image sensing technologies are expected to be a useful and effective way to suppress damage from crime and disasters in a safe and secure society. In this paper, we describe current important subjects, required functions, technical trends, and a few real examples of developed systems. For video surveillance, recognition of human trajectories and human behavior using image processing techniques is introduced, with a real example of violence detection in elevators. In the field of facility monitoring for civil engineering, useful machine vision applications are shown, such as automatic detection of concrete cracks on the walls of a building and recognition of crowds on bridges for effective guidance in an emergency.

  5. Accuracy of locating circular features using machine vision

    Science.gov (United States)

    Sklair, Cheryl W.; Hoff, William A.; Gatrell, Lance B.

    1992-03-01

    The ability to automatically locate objects using vision is a key technology for flexible, intelligent robotic operations. The vision task is facilitated by placing optical targets or markings in advance on the objects to be located. A number of researchers have advocated the use of circular target features as the features that can be most accurately located. This paper describes extensive analysis on circle centroid accuracy using both simulations and laboratory measurements. The work was part of an effort to design a video positioning sensor for NASA's Flight Telerobotic Servicer that would meet accuracy requirements. We have analyzed the main contributors to centroid error and have classified them into the following: (1) spatial quantization errors, (2) errors due to signal noise and random timing errors, (3) surface tilt errors, and (4) errors in modeling camera geometry. It is possible to compensate for the errors in (3) given an estimate of the tilt angle, and the errors from (4) by calibrating the intrinsic camera attributes. The errors in (1) and (2) cannot be compensated for, but they can be measured and their effects reduced somewhat. To characterize these error sources, we measured centroid repeatability under various conditions, including synchronization method, signal-to-noise ratio, and frequency attenuation. Although these results are specific to our video system and equipment, they provide a reference point that should be a characteristic of typical CCD cameras and digitization equipment.
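
    As an illustration of the centroid measurement whose accuracy the paper analyzes, the sub-pixel, intensity-weighted centroid of a circular target can be computed from image moments as sketched below. The synthetic disc, noise level and image size are assumptions made only for the example.

        # Hedged sketch: grey-scale moment (intensity-weighted) centroid of a target.
        import numpy as np

        def weighted_centroid(img: np.ndarray) -> tuple[float, float]:
            """Sub-pixel centroid (x, y) from the zeroth and first image moments."""
            ys, xs = np.indices(img.shape)
            m00 = img.sum()
            return float((xs * img).sum() / m00), float((ys * img).sum() / m00)

        # Synthetic circular target: a bright disc of radius 8 px centred at (20.3, 17.7).
        yy, xx = np.indices((40, 40))
        disc = (((xx - 20.3) ** 2 + (yy - 17.7) ** 2) <= 8 ** 2).astype(float)
        noisy = disc + np.random.default_rng(0).normal(0.0, 0.05, disc.shape)

        # The residual error reflects the spatial quantization and noise
        # contributions discussed in the paper.
        print(weighted_centroid(noisy))   # close to (20.3, 17.7)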

  6. Recent advances in the development and transfer of machine vision technologies for space

    Science.gov (United States)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  7. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  8. Applications of color machine vision in the agricultural and food industries

    Science.gov (United States)

    Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.

    1999-01-01

    Color is an important factor in agriculture and the food industry. Agricultural or prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and sort produce for defects, but also to perform genetic screening or make aesthetic judgements. The task of sorting produce following a color scale is very complex and requires special illumination and training. Also, this task cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real time. Applications for sorting a variety of agricultural products are included, e.g. seeds, meat, baked goods, plants and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image processing hardware and example applications for industry are described. The paper also presents an image analysis algorithm and a prototype machine vision system which was developed for industry. This system automatically locates the surface of some plants using a digital camera and predicts information such as the size, potential value and type of the plant. The algorithm developed is feasible for real-time identification in an industrial environment.

  9. Using a vision cognitive algorithm to schedule virtual machines

    OpenAIRE

    Zhao Jiaqi; Mhedheb Yousri; Tao Jie; Jrad Foued; Liu Qinghuai; Streit Achim

    2014-01-01

    Scheduling virtual machines is a major research topic for cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the...

  10. The reported incidence of man-machine interface issues in Army aviators using the Aviator's Night Vision System (ANVIS) in a combat theatre

    Science.gov (United States)

    Hiatt, Keith L.; Rash, Clarence E.

    2011-06-01

    Background: Army Aviators rely on the ANVIS for night operations. The human factors literature notes that the ANVIS man-machine interface results in reports of visual and spinal complaints. This is the first study that has looked at these issues in the much harsher combat environment. Last year, the authors reported on the statistically significant (pEnduring Freedom (OEF). Results: 82 aircrew (representing an aggregate of >89,000 flight hours, of which >22,000 were with ANVIS) participated. Analysis demonstrated high rates of complaints of back and neck pain at almost all levels. Additionally, the use of body armor and other Aviation Life Support Equipment (ALSE) caused significant ergonomic complaints when used with ANVIS. Conclusions: ANVIS use in a combat environment resulted in higher and different types of reports of spinal symptoms and other man-machine interface issues than what was previously reported. Data from this study may be more operationally relevant than the peacetime literature, as they are derived from actual combat and not from training flights, and they may have important implications for making combat predictions based on performance in training scenarios. Notably, aircrew remarked that they could not execute the mission without ANVIS and ALSE and accepted the degraded ergonomic environment.

  11. Man Machine Systems in Education.

    Science.gov (United States)

    Sall, Malkit S.

    This review of the research literature on the interaction between humans and computers discusses how man machine systems can be utilized effectively in the learning-teaching process, especially in secondary education. Beginning with a definition of man machine systems and comments on the poor quality of much of the computer-based learning material…

  12. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    Science.gov (United States)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
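
    For context, intensity-ratio PSP data are typically reduced with a Stern-Volmer-type calibration; the short sketch below fits that standard form to static-chamber points. The coefficients and set points are invented for illustration and are not the calibration values reported in the paper.

        # Hedged sketch: Stern-Volmer-style PSP calibration,
        # I_ref / I = A + B * (P / P_ref), fitted to chamber data and inverted.
        import numpy as np

        p_ref = 101.3                                            # kPa, wind-off reference
        pressures = np.array([60.0, 80.0, 101.3, 120.0, 140.0])  # chamber set points, kPa
        ratios = np.array([0.72, 0.86, 1.00, 1.12, 1.25])        # measured I_ref / I

        # Linear fit of ratio against normalized pressure, then inversion to
        # recover pressure from a measured intensity ratio.
        B, A = np.polyfit(pressures / p_ref, ratios, 1)

        def pressure_from_ratio(r: float) -> float:
            return p_ref * (r - A) / B

        print(pressure_from_ratio(1.05))   # pressure implied by a ~5% intensity drop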

  13. Fast and intuitive programming of adaptive laser cutting of lace enabled by machine vision

    Science.gov (United States)

    Vaamonde, Iago; Souto-López, Álvaro; García-Díaz, Antón

    2015-07-01

    A machine vision system has been developed, validated, and integrated in a commercial laser robot cell. It permits an offline graphical programming of laser cutting of lace. The user interface allows loading CAD designs and aligning them with images of lace pieces. Different thread widths are discriminated to generate proper cutting program templates. During online operation, the system aligns CAD models of pieces and lace images, pre-checks quality of lace cuts and adapts laser parameters to thread widths. For pieces detected with the required quality, the program template is adjusted by transforming the coordinates of every trajectory point. A low-cost lace feeding system was also developed for demonstration of full process automation.

  14. Magnetic imaging and machine vision NDT for the on-line inspection of stainless steel strips

    International Nuclear Information System (INIS)

    Ricci, M; Ficola, A; Fravolini, M L; Battaglini, L; Palazzi, A; Burrascano, P; Valigi, P; Appolloni, L; Cervo, S; Rocchi, C

    2013-01-01

    An on-line inspection system for stainless steel strips has been developed on an annealing and pickling line at the Acciai Speciali Terni S.p.A. steel mill. Besides a machine vision apparatus, the system contextually exploits a magnetic imaging system designed and realized for the specific application. The main goal of the research is represented by the fusion of the information provided by the two apparatuses that can improve the detection and classification tasks by enlarging the set of detectable defects. In this paper, the development, the calibration and the characteristics of the magnetic imaging apparatus are detailed and experimental results obtained both in laboratory and in situ are reported. A comparative analysis of the performances of the two devices is also reported based on preliminary results and some conclusions and perspectives are drawn. (paper)

  15. Machine vision method for online surface inspection of easy open can ends

    Science.gov (United States)

    Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel

    2006-10-01

    The easy-open can end manufacturing process in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is a visual inspection made by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling), so an automatic, online inspection system based on machine vision has been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. The surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, supplies interpretability to the operators in order to find the failure causes and reduce mean time to repair during failures, and allows the minimum can end repair coating quality to be modified.
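
    The fuzzy accept/reject decision can be illustrated with a toy rule evaluation. The membership functions, input features and threshold below are invented for the example; the paper's actual fuzzy model is not described in this record.

        # Hedged sketch: a two-input fuzzy accept/reject rule with triangular memberships.
        def tri(x: float, a: float, b: float, c: float) -> float:
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def acceptance_degree(coverage_pct: float, defect_area_mm2: float) -> float:
            """Degree (0..1) to which a can end's repair coating is acceptable."""
            good_coverage = tri(coverage_pct, 85.0, 100.0, 115.0)   # 'coverage is high'
            small_defect = tri(defect_area_mm2, -1.0, 0.0, 2.0)     # 'defect area is small'
            return min(good_coverage, small_defect)                 # fuzzy AND of both rules

        end_ok = acceptance_degree(coverage_pct=97.0, defect_area_mm2=0.4)
        print("accept" if end_ok > 0.5 else "reject")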

  16. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  17. Potential application of machine vision technology to saffron (Crocus sativus L.) quality characterization.

    Science.gov (United States)

    Kiani, Sajad; Minaei, Saeid

    2016-12-01

    Saffron quality characterization is an important issue in the food industry and of interest to consumers. This paper proposes an expert system based on the application of machine vision technology for characterization of saffron and shows how it can be employed in practice. There is a correlation between saffron color, its geographic location of production and some chemical attributes, which could be used for characterization of saffron quality and freshness. This may be accomplished by employing image processing techniques coupled with multivariate data analysis for quantification of saffron properties. Expert algorithms can be made available for prediction of saffron characteristics such as color, as well as for product classification. Copyright © 2016. Published by Elsevier Ltd.

  18. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision based algorithms for Unmanned Aerial Vehicles (UAV) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aim systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these mentioned application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAV. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  19. The systematic development of a machine vision based milking robot

    NARCIS (Netherlands)

    Gouws, J.

    1993-01-01

    Agriculture involves unique interactions between man, machines, and various elements from nature. Therefore the implementation of advanced technology in agriculture holds different challenges than in other sectors of the economy. This dissertation stems from research into the application of

  20. Missileborne Artificial Vision System (MAVIS)

    Science.gov (United States)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  1. Slide system for machine tools

    Science.gov (United States)

    Douglass, Spivey S.; Green, Walter L.

    1982-01-01

    The present invention relates to a machine tool which permits the machining of nonaxisymmetric surfaces on a workpiece while rotating the workpiece about a central axis of rotation. The machine tool comprises a conventional two-slide system (X-Y) with one of these slides being provided with a relatively short travel high-speed auxiliary slide which carries the material-removing tool. The auxiliary slide is synchronized with the spindle speed and the position of the other two slides and provides a high-speed reciprocating motion required for the displacement of the cutting tool for generating a nonaxisymmetric surface at a selected location on the workpiece.

  2. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  3. Gall mite inspection on dormant black currant buds using machine vision

    DEFF Research Database (Denmark)

    Nielsen, M. R.; Stigaard Laursen, Morten; Jonassen, M. S.

    2013-01-01

    This paper presents a novel machine vision-based approach for detecting and mapping gall mite infection in dormant buds on black currant bushes. A vehicle was fitted with four cameras and RTK-GPS. Results compared automatic detection to human decisions based on the images, and by mapping the results...

  4. Reflections on the Development of a Machine Vision Technology for the Forest Products

    Science.gov (United States)

    Richard W. Conners; D.Earl Kline; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    The authors have approximately 25 years experience in developing machine vision technology for the forest products industry. Based on this experience this paper will attempt to realistically predict what the future holds for this technology. In particular, this paper will attempt to describe some of the benefits this technology will offer, describe how the technology...

  5. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    International Nuclear Information System (INIS)

    Ilyas, Ismet P

    2013-01-01

    The manufacturing environment changes rapidly and in a turbulent fashion. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up vision and strategic planning toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and Additive Manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung in accelerating product design and process development, and discusses a direct deployment of this approach on a real case from our industrial partners, who regard it as one of the most important and strategic approaches in research as well as product/prototype development. The strategic aspects and needs of this combined approach in research, design and development are the main concerns of the presentation.

  6. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Recoverable citations from this record include: "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987); "... Occluded Parts," T.N. Mudge, J.L. Turney and R.A. Volz, Robotica, vol. 5, 1987, pp. 117-127; and "Vision Algorithms for Hypercube Machines," T.N. Mudge... [The remainder of the record text is garbled and not recoverable.]

  7. The use of holographic and diffractive optics for optimized machine vision illumination for critical dimension inspection

    Science.gov (United States)

    Lizotte, Todd E.; Ohar, Orest

    2004-02-01

    Illuminators used in machine vision applications typically produce non-uniform illumination on the target surface being observed, causing a variety of problems with machine vision alignment or measurement. In most circumstances the light source is broad spectrum, leading to further problems with image quality when viewed through a CCD camera. Configured with a simple light bulb and a mirrored reflector and/or frosted glass plates, these general illuminators are appropriate only for macro applications. Over the last five years newer illuminators have reached the market, including circular or rectangular arrays of high-intensity light-emitting diodes. These diode arrays are used to create monochromatic flood illumination of a surface that is to be inspected. The problem with these illumination techniques is that most of the light does not illuminate the desired areas but spreads broadly across the surface, or, when integrated with diffuser elements, tends to create shadowing effects similar to those of broad-spectrum light sources. In many cases a user will try to increase the performance of these illuminators by adding several assemblies together to increase the intensity, or by moving the illumination source closer to or farther from the surface being inspected. These non-uniform techniques can lead to machine vision errors, where the machine vision computer may read false information, such as interpreting non-uniform lighting or shadowing effects as defects. This paper covers a technique involving the use of holographic/diffractive hybrid optical elements that are integrated into standard and customized light sources used in the machine vision industry. The bulk of the paper describes the function and fabrication of the holographic/diffractive optics and how they can be tailored to improve illuminator design. A specific design and examples of it in operation are also presented.

  8. Characteristics of the Arcing Plasma Formation Effect in Spark-Assisted Chemical Engraving of Glass, Based on Machine Vision

    OpenAIRE

    Chao-Ching Ho; Dung-Sheng Wu

    2018-01-01

    Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. The processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During the machining of quartz glass, the spring-fed tool electrode was p...

  9. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.

  10. Automatic pellet density checking machine using vision technique

    International Nuclear Information System (INIS)

    Kumar, Suman; Raju, Y.S.; Raj Kumar, J.V.; Sairam, S.; Sheela; Hemantha Rao, G.V.S.

    2012-01-01

    Uranium di-oxide powder prepared through a chemical process is converted to green pellets through the powder metallurgy route of pre-compaction and final compaction operations. These green pellets are kept in a molybdenum boat, which consists of a molybdenum base and a shroud. The boats are passed through high temperature sintering furnaces to achieve the required pellet density. At present MIL standard 105 E is followed for measuring the density of sintered pellets in the boat. As per AQL 2.5 of the MIL standard, five pellets are collected from each boat, which contains approximately 800 pellets. The densities of these collected pellets are measured. If any one pellet's density is less than the required value, the entire boat of pellets is rejected and sent back for dissolution and further processing. An Automatic Pellet Density Checking Machine (APDCM) was developed to salvage the acceptable density pellets from the rejected boats of pellets.

  11. Investigation into the use of smartphone as a machine vision device for engineering metrology and flaw detection, with focus on drilling

    Science.gov (United States)

    Razdan, Vikram; Bateman, Richard

    2015-05-01

    This study investigates the use of a Smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to Machine vision systems, which are out of range for small-scale manufacturers. A Smartphone has to provide a similar level of accuracy as Machine vision devices like Smart cameras. The objective set out was to develop an App on an Android Smartphone, incorporating advanced Computer vision algorithms written in Java code. The App could then be used for recording measurements of Twist Drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for an in-depth study of Machine vision systems and their capabilities, including a comparison between the HTC One X Android Smartphone and the Teledyne Dalsa BOA Smart camera. A review of the existing metrology Apps in the market was also undertaken. In addition, the drilling operation was evaluated to establish key measurement parameters of a Twist Drill bit, especially flank wear and diameter. The methodology covers software development of the Android App, including the use of image processing algorithms like Gaussian Blur, Sobel and Canny available from the OpenCV software library, as well as designing and developing the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of Twist Drill bits and holes, including diametrical measurements and flaw detection. The results show that Smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the Smartphone App is below the level provided by Machine vision devices like Smart cameras. A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost Machine vision device.

  12. Express quality control of chicken eggs by machine vision

    Science.gov (United States)

    Gorbunova, Elena V.; Chertov, Aleksandr N.; Peretyagin, Vladimir S.; Korotaev, Valery V.; Arbuzova, Evgeniia A.

    2017-06-01

    The urgency of the task of analyzing foodstuff quality is determined by the strategy for forming a healthy lifestyle and rational nutrition for the world population. This applies to products such as chicken eggs. In particular, it is necessary to control chicken egg quality at the farm prior to incubation in order to eliminate possible hereditary diseases, high embryonic mortality and a sharp decrease in the quality of the bred young. To this day, there are no objective instruments on the market for contactless express quality control, that is, analytical equipment that allows high-precision examination of chicken egg quality, which is determined by the color parameters of the eggshell (color uniformity) and the yolk, and by the presence of various defects in the eggshell (cracks, growths, wrinkles, dirt). All of these features are usually evaluated only visually (subjectively) with the help of normalized color standards and ovoscopes. Therefore, this work is devoted to investigating the application of a contactless express control method based on machine vision for chicken egg quality analysis. As a result of the studies, a prototype with the appropriate software was proposed. Experimental studies of this equipment on a representative sample of eggs from chickens of different breeds have been carried out (the total number of analyzed samples exceeds 300). The correctness of the color analysis was verified by spectrophotometric studies of the eggshell surface.

  13. Controls and Machine Protection Systems

    CERN Document Server

    Carrone, E.

    2016-01-01

    Machine protection, as part of accelerator control systems, can be managed with a 'functional safety' approach, which takes into account product life cycle, processes, quality, industrial standards and cybersafety. This paper will discuss strategies to manage such complexity and the related risks, with particular attention to fail-safe design and safety integrity levels, software and hardware standards, testing, and verification philosophy. It will also discuss an implementation of a machine protection system at the SLAC National Accelerator Laboratory's Linac Coherent Light Source (LCLS).

  14. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

    Human performance can get degraded due to boredom, distraction and fatigue in vision-related tasks such as measurement, counting etc. Vision based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in the fields of sensors and related technologies, and advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time than humans. Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement, counting etc. This paper describes in brief four such applications. (author)

  15. Intelligent Machine Vision for Automated Fence Intruder Detection Using Self-organizing Map

    Directory of Open Access Journals (Sweden)

    Veldin A. Talorete Jr.

    2017-03-01

    Full Text Available This paper presents an intelligent machine vision system for automated fence intruder detection. A series of still images containing fence events, captured with Internet Protocol cameras, was used as input data to the system. Two classifiers were used: the first classifies human posture and the second classifies intruder location. The classifiers were implemented using Self-Organizing Maps after the application of several image segmentation processes. The human posture classifier is in charge of classifying the detected subject's posture patterns from the subject's silhouette. The Intruder Localization Classifier estimates the location of the intruder with respect to the fence, using geometric features extracted from the images as inputs. The system is capable of activating the alarm, displaying the actual image and depicting the location of the intruder when an intruder is detected. In detecting intruder posture, the system achieved a success rate of 88%. Overall system accuracy is 83% for day-time intruder localization and 88% for night-time intruder localization.

  16. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the needs of efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the following iterative method. After that, a bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive, adjustable-step scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
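    As a rough illustration of the statistics-based initialization plus iterative refinement the abstract outlines, the following sketch uses gray-world gains with a shrinking adjustment step; the formulas and step schedule are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch only (not the paper's exact algorithm): gray-world
# initialization from R/G/B statistics, followed by a small adaptive-step
# iterative refinement of the per-channel gains.
import numpy as np

def white_balance(rgb: np.ndarray, iters: int = 5, step: float = 0.5) -> np.ndarray:
    img = rgb.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)           # per-channel means
    gains = means.mean() / np.maximum(means, 1e-6)    # gray-world initial gains
    for _ in range(iters):
        balanced = img * gains
        means = balanced.reshape(-1, 3).mean(axis=0)
        err = means.mean() - means                    # residual colour cast
        gains *= 1.0 + step * err / np.maximum(means, 1e-6)
        step *= 0.5                                   # adaptive (shrinking) step
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```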

  17. Measuring wood failure percentage using a machine vision system

    Directory of Open Access Journals (Sweden)

    Christovão Pereira Abrahão

    2003-02-01

    Full Text Available A large number of wood-based products can be obtained with the use of adhesives. For the industrial manufacture of glued wood products, internationally recognized standards require that wood adhesion be tested according to standardized procedures and that the test results report, in addition to joint strength, the wood failure percentage. To evaluate wood failure, the ASTM D5266-99 standard recommends the use of a grid template printed on a transparent sheet. However, this evaluation is not only time-consuming but also highly subjective. The hypothesis of the present work is that wood failure can be quantified with a machine vision system, making the procedure faster and less subjective. Two types of automatic thresholding algorithms were tested on images acquired with flatbed scanners; the glued wood samples were scanned after shear tests under compression. It was concluded that wood failure can be quantified by automatic thresholding in place of the conventional grid method. The tested algorithms showed a mean absolute error of less than 3% relative to the conventional grid system.
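    As an illustration of automatic thresholding for this task, the sketch below uses Otsu's method (the record does not name the two algorithms tested) to estimate the wood-failure fraction of a scanned shear surface.

```python
# Illustrative sketch, assuming Otsu's method as the automatic threshold:
# estimate the wood-failure percentage of a scanned glue-line image as the
# fraction of "failed" (torn-wood) pixels.
import cv2

def wood_failure_percentage(image_path: str) -> float:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Otsu picks the threshold separating torn-wood from adhesive regions.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Assumes torn wood scans brighter than the adhesive film; invert the mask
    # if the scan polarity is the opposite.
    failed = (mask > 0).sum()
    return 100.0 * failed / mask.size
```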

  18. An explainable deep machine vision framework for plant stress phenotyping.

    Science.gov (United States)

    Ghosal, Sambuddha; Blystone, David; Singh, Asheesh K; Ganapathysubramanian, Baskar; Singh, Arti; Sarkar, Soumik

    2018-05-01

    Current approaches for accurate identification, classification, and quantification of biotic and abiotic stresses in crop research and production are predominantly visual and require specialized training. However, such techniques are hindered by subjectivity resulting from inter- and intrarater cognitive variability. This translates to erroneous decisions and a significant waste of resources. Here, we demonstrate a machine learning framework's ability to identify and classify a diverse set of foliar stresses in soybean [ Glycine max (L.) Merr.] with remarkable accuracy. We also present an explanation mechanism, using the top-K high-resolution feature maps that isolate the visual symptoms used to make predictions. This unsupervised identification of visual symptoms provides a quantitative measure of stress severity, allowing for identification (type of foliar stress), classification (low, medium, or high stress), and quantification (stress severity) in a single framework without detailed symptom annotation by experts. We reliably identified and classified several biotic (bacterial and fungal diseases) and abiotic (chemical injury and nutrient deficiency) stresses by learning from over 25,000 images. The learned model is robust to input image perturbations, demonstrating viability for high-throughput deployment. We also noticed that the learned model appears to be agnostic to species, seemingly demonstrating an ability of transfer learning. The availability of an explainable model that can consistently, rapidly, and accurately identify and quantify foliar stresses would have significant implications in scientific research, plant breeding, and crop production. The trained model could be deployed in mobile platforms (e.g., unmanned air vehicles and automated ground scouts) for rapid, large-scale scouting or as a mobile application for real-time detection of stress by farmers and researchers. Copyright © 2018 the Author(s). Published by PNAS.

  19. An explainable deep machine vision framework for plant stress phenotyping

    Science.gov (United States)

    Blystone, David; Ganapathysubramanian, Baskar; Singh, Arti; Sarkar, Soumik

    2018-01-01

    Current approaches for accurate identification, classification, and quantification of biotic and abiotic stresses in crop research and production are predominantly visual and require specialized training. However, such techniques are hindered by subjectivity resulting from inter- and intrarater cognitive variability. This translates to erroneous decisions and a significant waste of resources. Here, we demonstrate a machine learning framework’s ability to identify and classify a diverse set of foliar stresses in soybean [Glycine max (L.) Merr.] with remarkable accuracy. We also present an explanation mechanism, using the top-K high-resolution feature maps that isolate the visual symptoms used to make predictions. This unsupervised identification of visual symptoms provides a quantitative measure of stress severity, allowing for identification (type of foliar stress), classification (low, medium, or high stress), and quantification (stress severity) in a single framework without detailed symptom annotation by experts. We reliably identified and classified several biotic (bacterial and fungal diseases) and abiotic (chemical injury and nutrient deficiency) stresses by learning from over 25,000 images. The learned model is robust to input image perturbations, demonstrating viability for high-throughput deployment. We also noticed that the learned model appears to be agnostic to species, seemingly demonstrating an ability of transfer learning. The availability of an explainable model that can consistently, rapidly, and accurately identify and quantify foliar stresses would have significant implications in scientific research, plant breeding, and crop production. The trained model could be deployed in mobile platforms (e.g., unmanned air vehicles and automated ground scouts) for rapid, large-scale scouting or as a mobile application for real-time detection of stress by farmers and researchers. PMID:29666265

  20. Health system vision of iran in 2025.

    Science.gov (United States)

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory text was also compiled to create a complete image of the vision. Social values, leaders' strategic goals and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered major values and ideals in the society of Iran; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.

  1. Quality Evaluation for Appearance of Needle Green Tea Based on Machine Vision and Process Parameters

    DEFF Research Database (Denmark)

    Dong, Chunwang; Zhu, Hongkai; Zhou, Xiaofen

    2017-01-01

    ..., extreme learning machine (ELM) and a strong predictor integration algorithm (ELM-AdaBoost). The comparison of the results showed that the ELM-AdaBoost model based on image characteristics had the best performance (RPD was more than 2). Its predictive performance was superior to other models, with smaller..., and modeling faster (0.014~0.281 s). The AdaBoost method, which is a hybrid integrated algorithm, can further promote the accuracy and generalization capability of the model. The above conclusions indicated that it is feasible to evaluate the quality of appearance of needle green tea based on machine vision...

  2. A new method of machine vision reprocessing based on cellular neural networks

    International Nuclear Information System (INIS)

    Jianhua, W.; Liping, Z.; Fenfang, Z.; Guojian, H.

    1996-01-01

    This paper proposes a method of image preprocessing in machine vision based on the Cellular Neural Network (CNN). CNN is introduced to address image smoothing, image recovery, image boundary detection and other image preprocessing problems. The proposed methods are simple, so the speed of the algorithms is increased greatly to suit the needs of real-time image processing. The experimental results show satisfactory performance.

  3. Fuzzy classification for strawberry diseases-infection using machine vision and soft-computing techniques

    Science.gov (United States)

    Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil

    2018-04-01

    Robotic agriculture requires smart and feasible techniques to substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean product, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discrimination between healthy and disease-infected strawberry leaves which does not require neural networks or time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of disease infection is approximated as a human brain would do, a fuzzy decision maker classifies the leaves from the images captured on-site, having the same properties as human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the other segmented infection class, using a typical human instant classification approximation as the benchmark, holding higher accuracy than a human eye identifier. The fuzzy-based classifier provides an approximate result for decision making on the leaf status, i.e., whether it is healthy or not.
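    A minimal sketch of a fuzzy decision maker of this kind is given below, using a greenness index and triangular membership functions; the index and the breakpoints are illustrative assumptions, not the paper's tuned parameters.

```python
# Minimal illustrative sketch of a fuzzy decision maker on a leaf greenness
# index; the membership breakpoints are invented for illustration and are not
# the paper's optimized values.
import numpy as np

def greenness_index(rgb: np.ndarray) -> float:
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    return float((g / np.maximum(r + g + b, 1e-6)).mean())  # share of green

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_leaf(rgb: np.ndarray) -> str:
    g = greenness_index(rgb)
    memberships = {
        "infected": tri(g, 0.0, 0.25, 0.40),   # pale / yellowing leaves
        "healthy": tri(g, 0.30, 0.50, 1.0),    # strongly green leaves
    }
    return max(memberships, key=memberships.get)
```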

  4. Whole surface image reconstruction for machine vision inspection of fruit

    Science.gov (United States)

    Reese, D. Y.; Lefcourt, A. M.; Kim, M. S.; Lo, Y. M.

    2007-09-01

    Automated imaging systems offer the potential to inspect the quality and safety of fruits and vegetables consumed by the public. Current automated inspection systems allow fruit such as apples to be sorted for quality issues including color and size by looking at a portion of the surface of each fruit. However, to inspect for defects and contamination, the whole surface of each fruit must be imaged. The goal of this project was to develop an effective and economical method for whole surface imaging of apples using mirrors and a single camera. Challenges include mapping the concave stem and calyx regions. To allow the entire surface of an apple to be imaged, apples were suspended or rolled above the mirrors using two parallel music wires. A camera above the apples captured 90 images per sec (640 by 480 pixels). Single or multiple flat or concave mirrors were mounted around the apple in various configurations to maximize surface imaging. Data suggest that the use of two flat mirrors provides inadequate coverage of a fruit but using two parabolic concave mirrors allows the entire surface to be mapped. Parabolic concave mirrors magnify images, which results in greater pixel resolution and reduced distortion. This result suggests that a single camera with two parabolic concave mirrors can be a cost-effective method for whole surface imaging.

  5. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.
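    The record implements its pyramid-based depth-map calculation in an FPGA; as a purely software analogue of the same idea, the sketch below computes a disparity map from a rectified stereo pair with OpenCV's block matcher (an assumption made here, not the paper's algorithm).

```python
# Software analogue only: OpenCV's block matcher turns a rectified stereo pair
# into a disparity (depth) map, the quantity the FPGA pipeline computes.
import cv2

def disparity_map(left_path: str, right_path: str):
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left, right)          # fixed-point, scaled by 16
    return disp.astype("float32") / 16.0
```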

  6. Beef identification in industrial slaughterhouses using machine vision techniques

    Directory of Open Access Journals (Sweden)

    J. F. Velez

    2013-10-01

    Full Text Available Accurate individual animal identification provides producers with useful information for taking management decisions about an individual animal or about the complete herd. This identification task is also important for ensuring the integrity of the food chain; consequently, many consumers are turning their attention to issues of quality in animal food production methods. This work describes an implemented solution for individual beef identification, covering the time from cattle shipment arrival at the slaughterhouse until the animals are slaughtered and cut up. Our beef identification approach is image-based, and the pursued goals are the correct automatic extraction and matching of the numeric information extracted from the beef ear-tag with the corresponding information from the Bovine Identification Document (BID). The correct identification results achieved by our method are near 90% under the practical working conditions of slaughterhouses (i.e. problems with dirt and bad illumination conditions). Moreover, the presence of multiple pieces of machinery in industrial slaughterhouses makes the use of Radio Frequency Identification (RFID) beef tags difficult, due to the high risk of interference between RFID and the other technologies in the workplace. The solution presented is hardware/software, since it includes a specialized hardware system that was also developed. Our approach considers the current EU legislation for beef traceability, and it reduces the economic cost of individual beef identification with respect to RFID transponders. The implemented system has been in satisfactory use for more than three years in one of the largest industrial slaughterhouses in Spain.

  7. Characterisation of flotation froth colour and structure by machine vision

    Science.gov (United States)

    Bonifazi, Giuseppe; Serranti, Silvia; Volpe, Fabio; Zuco, Riccardo

    2001-11-01

    It is well known and well recognised that flotation is a process that is complex to monitor and study if a classical approach based on the evaluation of signals from sensors is adopted. Sensors are usually strategically positioned in the bank cells and detect global process variables such as pH, reagent addition, froth level, on-stream chemical analysis, particle size distribution, etc. In the last ten years several studies have been carried out with the main goal of utilising imaging techniques to detect froth bubble characteristics and to evaluate flotation process performance. In this paper an approach of this type is described. More specifically, image processing techniques to automatically measure the colour and the structure of the froth bubbles are presented and the results are discussed. All the investigations are carried out on digital sample images collected in an industrial flotation plant operating in steady-state conditions. The colour analysis is performed on the whole surface of the froth images considering different colour reference systems (RGB, HSV, HSI); the morphological measurements are obtained after the application of selected enhancement and segmentation techniques, necessary to treat the bubbles as separate domains. The multiple correlations found between froth mineral concentrations (Cu, MgO, Zn and Pb content) and the extracted colour and structure parameters are good in most situations.
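    A minimal sketch of the kind of whole-image colour measurement the record describes is shown below, computing mean descriptors in the RGB and HSV reference systems; it is illustrative only and omits the morphological (bubble structure) measurements.

```python
# Illustrative sketch: mean colour descriptors of a froth image in the RGB and
# HSV reference systems, the kind of per-image features correlated with assay
# values (Cu, MgO, Zn, Pb) in the record.
import cv2
import numpy as np

def froth_colour_features(image_path: str) -> dict:
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    b, g, r = cv2.split(bgr.astype(np.float32))
    h, s, v = cv2.split(hsv.astype(np.float32))
    return {
        "mean_R": float(r.mean()), "mean_G": float(g.mean()), "mean_B": float(b.mean()),
        "mean_H": float(h.mean()), "mean_S": float(s.mean()), "mean_V": float(v.mean()),
    }
```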

  8. An investigation of vision problems and the vision care system in rural China.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye-doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  9. Extreme Learning Machine and Moving Least Square Regression Based Solar Panel Vision Inspection

    Directory of Open Access Journals (Sweden)

    Heng Liu

    2017-01-01

    Full Text Available In recent years, learning-based machine intelligence has attracted a lot of attention across science and engineering. Particularly in the field of automatic industrial inspection, machine learning based vision inspection plays an increasingly important role in defect identification and feature extraction. By learning from image samples, many features of industrial objects, such as shapes, positions, and orientation angles, can be obtained and then utilized to determine whether there is a defect or not. However, robustness and speed are not easily achieved in such an inspection approach. In this work, for solar panel vision inspection, we present an extreme learning machine (ELM) and moving least square regression based approach to identify solder joint defects and detect the panel position. Firstly, histogram peaks distribution (HPD) and fractional calculus are applied for image preprocessing. Then an ELM-based identification of defective solder joints is discussed in detail. Finally, the moving least square regression (MLSR) algorithm is introduced for solar panel position determination. Experimental results and comparisons show that the proposed ELM and MLSR based inspection method is efficient not only in detection accuracy but also in processing speed.
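    For readers unfamiliar with extreme learning machines, the following is a minimal NumPy sketch of an ELM classifier of the sort used for solder-joint defect identification; the hidden-layer size and sigmoid activation are assumptions, not the paper's settings.

```python
# Minimal ELM sketch (illustrative, not the paper's implementation): a random
# hidden layer plus a least-squares output layer for binary defect labels.
import numpy as np

class ELM:
    def __init__(self, n_inputs: int, n_hidden: int = 100, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))   # random input weights
        self.b = rng.normal(size=n_hidden)               # random biases
        self.beta = None                                 # learned output weights

    def _hidden(self, X: np.ndarray) -> np.ndarray:
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid features

    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y                # closed-form solution

    def predict(self, X: np.ndarray) -> np.ndarray:
        return (self._hidden(X) @ self.beta > 0.5).astype(int)
```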

  10. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modelling system and GIS. The area texture is generated from remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation procedure, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission; accordingly, flight safety is improved. This system is also useful for validating visual flight procedure designs, and it helps flight procedure design.

  11. Machine Vision based Micro-crack Inspection in Thin-film Solar Cell Panel

    Directory of Open Access Journals (Sweden)

    Zhang Yinong

    2014-09-01

    Full Text Available A thin film solar cell consists of various layers, so the surface of the cell shows heterogeneous textures. Because of this property, visual inspection of micro-cracks is very difficult. In this paper, we propose a machine vision-based micro-crack detection scheme for thin film solar cell panels. In the proposed method, crack edge detection is based on the application of a diagonal kernel and a cross kernel in parallel. Experimental results show that the proposed method has better micro-crack detection performance than conventional anisotropic-model-based methods using a cross kernel.
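    A minimal sketch of applying a cross kernel and a diagonal kernel in parallel is given below; the kernel coefficients and threshold are invented for illustration and are not the paper's values.

```python
# Illustrative sketch: apply a cross-shaped and a diagonal-shaped difference
# kernel in parallel and keep the stronger response per pixel; the
# coefficients and threshold below are invented for illustration.
import cv2
import numpy as np

CROSS = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=np.float32)
DIAG = np.array([[-1, 0, -1],
                 [0, 4, 0],
                 [-1, 0, -1]], dtype=np.float32)

def crack_response(gray: np.ndarray) -> np.ndarray:
    g = gray.astype(np.float32)
    r_cross = np.abs(cv2.filter2D(g, -1, CROSS))
    r_diag = np.abs(cv2.filter2D(g, -1, DIAG))
    return np.maximum(r_cross, r_diag)   # strongest edge response per pixel

def crack_mask(gray: np.ndarray, thresh: float = 40.0) -> np.ndarray:
    return (crack_response(gray) > thresh).astype(np.uint8) * 255
```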

  12. A neurite quality index and machine vision software for improved quantification of neurodegeneration.

    Science.gov (United States)

    Romero, Peggy; Miller, Ted; Garakani, Arman

    2009-12-01

    Current methods to assess neurodegradation in dorsal root ganglion cultures as a model for neurodegenerative diseases are imprecise and time-consuming. Here we describe two new methods to quantify neuroprotection in these cultures. The neurite quality index (NQI) builds upon earlier manual methods, incorporating additional morphological events to increase detection sensitivity for the detection of early degeneration events. Neurosight is a machine vision-based method that recapitulates many of the strengths of NQI while enabling high-throughput screening applications with decreased costs.

  13. The Intangible Assets Advantages in the Machine Vision Inspection of Thermoplastic Materials

    Science.gov (United States)

    Muntean, Diana; Răulea, Andreea Simina

    2017-12-01

    Innovation is not a simple concept, but it is the main source of success. It is more important to have the right people and mindsets in place than to have a perfectly crafted plan in order to make the most out of an idea or business. The aim of this paper is to emphasize the importance of intangible assets in the machine vision inspection of thermoplastic materials, pointing out some aspects related to knowledge-based assets and their role in developing a successful idea into a successful product.

  14. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led up to the determination that a better navigation solution than GPS alone is needed is first presented. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davidson et al. in which they dub their algorithm MonoSLAM [1--4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision only and vision/IMU form, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
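    The thesis's landmark correspondence step relies on OpenCV's pyramidal Lucas-Kanade tracker; the sketch below shows a typical use of that tracker, with parameter values chosen for illustration rather than taken from the thesis.

```python
# Illustrative use of OpenCV's pyramidal Lucas-Kanade tracker for
# frame-to-frame landmark correspondences; the detector and window/pyramid
# parameters are typical defaults, not the thesis's settings.
import cv2
import numpy as np

def track_landmarks(prev_gray: np.ndarray, curr_gray: np.ndarray):
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p0, None,
        winSize=(21, 21), maxLevel=3)        # 3 pyramid levels
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```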

  15. Nuclear reactor machine refuelling system

    International Nuclear Information System (INIS)

    Cashen, W.S.; Erwin, D.

    1977-01-01

    Part of an on-line fuelling machine for a CANDU pressure-tube reactor is described. The present invention provides a refuelling machine wherein the fuelling components, including the fuel carrier and the closure adapter, are positively positioned and retained within the machine magazine or positively secured to the machine charge tube head, and cannot be accidentally disengaged as in former practice. The positive positioning devices include an arcuate keeper plate. Simplified hooked fingers are used. (NDH)

  16. Superconducting magnetic systems and electrical machines

    International Nuclear Information System (INIS)

    Glebov, I.A.

    1975-01-01

    The use of superconductors for magnets and electrical machines attracts close attention of designers and scientists. A description is given of an ongoing research program to create superconductive magnetic systems, commutator motors, homopolar machines, topological generators and turbogenerators with superconductive field windings. All the machines are tentative experimental models and serve as a basis for further developments

  17. Inspecting a research reactor's control rod surface for pitting using a machine vision

    International Nuclear Information System (INIS)

    Tokuhiro, Akira T.; Vadakattu, Shreekanth

    2005-01-01

    Inspection for pits on the control rod is performed to study the degradation of the control rod material, which helps in estimating the service life of the control rod at the UMR nuclear reactor (UMRR). This task is currently performed by visual inspection and recorded subjectively. The conventional visual inspection to identify pits on the control rod surface can be automated using a machine vision technique. Since in-service control rods were not available for capturing images and measuring the number and size of pits, the machine vision method was applied to SAE 1018 steel coupons immersed in oxygen-saturated de-ionized water at 30°, 50° and 70°. Images were captured after each test cycle at different light intensities to reveal the surface topography of the coupons and were analyzed for the number of pits and pit size using EPIX XCAP-Std software. The captured and analyzed images provided quantitative results for the steel coupons and demonstrated that the method can be applied to identify pits on control rod surfaces in place of conventional visual inspection. (author)

  18. Feature recognition and detection for ancient architecture based on machine vision

    Science.gov (United States)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

    Ancient architecture has a very high historical and artistic value. Ancient buildings have a wide variety of textures and decorative paintings, which carry a great deal of historical meaning. Therefore, research on and statistics of these compositional and decorative features play an important role in subsequent studies. However, until recently, the statistics of those components have mainly been compiled manually, which consumes a lot of labor and time and is inefficient. At present, with the strong support of big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface texture images of ancient building components are manually classified as a set of samples. Then, a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.
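    A minimal PyTorch sketch of the kind of convolutional classifier described is given below; the architecture, input size and number of component classes are assumptions for illustration, not the paper's network.

```python
# Minimal sketch of a CNN texture classifier (illustrative only); the layer
# sizes and the number of component classes (8) are assumptions.
import torch
import torch.nn as nn

class TextureCNN(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: one training step on a dummy batch of 64x64 RGB texture patches.
model = TextureCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(4, 3, 64, 64), torch.randint(0, 8, (4,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```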

  19. Broiler weight estimation based on machine vision and artificial neural network.

    Science.gov (United States)

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of broiler chickens, using 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse-fitting algorithm was used and the chickens' head and tail were removed using the Chan-Vese method. 3. The correlations between body weight and 6 extracted physical features indicated that there were strong correlations between body weight and 5 of the features, including area, perimeter, convex area, and major and minor axis length. 5. According to statistical analysis there was no significant difference between morning and afternoon data over the 42 d. 6. In an attempt to improve the accuracy of live weight approximation, different ANN techniques, including Bayesian regulation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent, were used. Bayesian regulation, with an R2 value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.
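    A hedged sketch of the pipeline, extracting the named shape features from a binary silhouette with OpenCV and regressing weight with scikit-learn's MLPRegressor (standing in for the Bayesian-regulation network used in the study), is shown below.

```python
# Hedged sketch: extract the shape features named in the record from a binary
# chicken silhouette, then regress body weight with a small MLP. scikit-learn's
# MLPRegressor is a stand-in; it does not offer the Bayesian-regulation
# training used in the study.
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

def silhouette_features(mask: np.ndarray) -> list:
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, closed=True)
    convex_area = cv2.contourArea(cv2.convexHull(c))
    (_, _), (major, minor), _ = cv2.fitEllipse(c)     # needs >= 5 contour points
    return [area, perimeter, convex_area, major, minor]

# X: one feature row per image, y: measured weights in grams.
def train_weight_model(X: np.ndarray, y: np.ndarray) -> MLPRegressor:
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    return model.fit(X, y)
```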

  20. Vision-Based Perception and Classification of Mosquitoes Using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Masataka Fuchida

    2017-01-01

    Full Text Available The need for a novel automated mosquito perception and classification method has become increasingly essential in recent years, with a steeply increasing number of mosquito-borne diseases and associated casualties. There exist remote sensing and GIS-based methods for mapping potential mosquito habitats and locations that are prone to mosquito-borne diseases, but these methods generally do not account for species-wise identification of mosquitoes in closed-perimeter regions. Traditional methods for mosquito classification involve highly manual processes requiring tedious sample collection and supervised laboratory analysis. In this research work, we present the design and experimental validation of an automated vision-based mosquito classification module that can be deployed in closed-perimeter mosquito habitats. The module is capable of distinguishing mosquitoes from other bugs such as bees and flies by extracting morphological features, followed by support vector machine-based classification. In addition, this paper presents the results of three variants of the support vector machine classifier in the context of the mosquito classification problem. This vision-based approach to the mosquito classification problem presents an efficient alternative to the conventional methods for mosquito surveillance, mapping and sample image collection. Experimental results involving classification between mosquitoes and a predefined set of other bugs using multiple classification strategies demonstrate the efficacy and validity of the proposed approach with a maximum recall of 98%.
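    As an illustration, the sketch below compares three SVM variants on morphological feature vectors with scikit-learn; treating the variants as different kernels is an assumption made here for the sketch, not a detail taken from the record.

```python
# Illustrative comparison of three SVM variants on morphological feature
# vectors for the binary mosquito-vs-other-bugs problem; the kernel choices
# are an assumption for this sketch.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_svm_variants(X: np.ndarray, y: np.ndarray) -> dict:
    scores = {}
    for kernel in ("linear", "rbf", "poly"):
        clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
        scores[kernel] = cross_val_score(clf, X, y, cv=5, scoring="recall").mean()
    return scores
```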

  1. Development and evaluation of a targeted orchard sprayer using machine vision technology

    Directory of Open Access Journals (Sweden)

    H Asaei

    2016-09-01

    Full Text Available Introduction In conventional methods of spraying in orchards, the amount of pesticide sprayed is not targeted. Pesticide consumption data indicate that the application rate of pesticide in greenhouses and orchards is higher than required. Less than 30% of the pesticide sprayed actually reaches nursery canopies, while the rest is lost and wasted. Nowadays, variable rate spray applicators using intelligent control systems can greatly reduce pesticide use and off-target contamination of the environment in nurseries and orchards. In this research a prototype orchard sprayer based on machine vision technology was developed and evaluated. This sprayer performs real-time spraying based on the tree canopy structure and its greenness extent, which improves the efficiency of spraying operations in orchards. Materials and Methods The equipment used in this study comprised three main parts: 1- mechanical equipment, 2- a data collection and image processing system, and 3- an electronic control system. Two booms were designed to support the spray nozzles and to provide flexibility in directing the spray nozzles to the target. Each boom comprised two parts, a vertical part and an inclined part. The vertical part of the boom was used to spray one side of the trees during forward movement of the tractor, and the inclined part of the boom was designed to spray the upper half of the tree canopy. Three nozzles were considered on each boom: two nozzles were placed on the vertical part of the boom, whereas one other nozzle was mounted on the inclined part. To reach different tree heights, the vertical part of the boom was able to slide up and down. LabVIEW (version 2011) was used for real-time image processing. Images were captured through RGB cameras mounted on a horizontal bar attached on top of the tractor to take images separately for each side of the sprayer. Images were captured from the top of the canopies looking downward. The triggering signal for

  2. Semiautonomous teleoperation system with vision guidance

    Science.gov (United States)

    Yu, Wai; Pretlove, John R. G.

    1998-12-01

    This paper describes the ongoing research work on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots always suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. Thus, this system has been developed for this purpose. It also serves as an experimental platform to test the idea of using a combination of human and computer intelligence in teleoperation and to find the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system and a graphical user interface which connects the operator with the remote robot. The system description is given in this paper, as well as preliminary experimental results of the system evaluation.

  3. The Diamond machine protection system

    International Nuclear Information System (INIS)

    Heron, M.T.; Lay, S.; Chernousko, Y.; Hamadyk, P.; Rotolo, N.

    2012-01-01

    The Diamond Light Source Machine Protection System (MPS) manages the hazards from high power photon beams and other hazards to ensure equipment protection on the booster synchrotron and storage ring. The system has a shutdown requirement of under 1 msec on a beam mis-steer and has to manage in excess of a thousand interlocks. This is realised using a combination of bespoke hardware and programmable logic controllers. The MPS monitors a large number of interlock signals from diagnostics instrumentation, vacuum instrumentation, photon front ends and plant monitoring subsystems. Based on logic it can then remove the source of the energy to ensure protection of equipment. Depending on requirements, interlocks are managed on a Local or a Global basis. The Global system is structured as two layers and supports fast- and slow-response-time interlock requirements. A Global MPS module takes the interlock permits for a given interlock circuit from each of the cells of the accelerator and, subject to all interlocks being good, produces a permit to operate the source of energy: the RF amplifier for vessel protection and the PSU for magnet protection. A Local MPS module takes fast interlock inputs from one cell of the Storage Ring or one quadrant of the Booster. Fast interlocks are those that must drop the beam in under 400 μsec (the maximum speed of the interlock) in the event of failure. EPICS provides the user interface to the MPS system.

  4. Scaling up liquid state machines to predict over address events from dynamic vision sensors.

    Science.gov (United States)

    Kaiser, Jacques; Stal, Rainer; Subramoney, Anand; Roennau, Arne; Dillmann, Rüdiger

    2017-09-01

    Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole [Formula: see text] event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282-93 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.

  5. Multi-channel automotive night vision system

    Science.gov (United States)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the light source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has an automatic light intensity adjustment, which together ensure the image quality. The principle and composition of the system are described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing and display are discussed. The system can be used in driver assistance, car BLIS, car parking assist systems and car alarm systems, by day and night.

  6. Synthetic vision systems: operational considerations simulation experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  7. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  8. Machine Protection System response in 2011

    CERN Document Server

    Zerlauth, M; Wenninger, J

    2012-01-01

    The performance of the machine protection system during the 2011 run is summarized in this paper. Following an analysis of the beam dump causes in comparison to the previous 2010 run, special emphasis is given to analysing events which risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems, as well as in the change management, are evaluated along with their impact on the 2012 run. The role of the restricted Machine Protection Panel (rMPP) during the various operational phases, such as commissioning, the intensity ramp up and Machine Developments, is also discussed.

  9. Nondestructive Detection of the Internalquality of Apple Using X-Ray and Machine Vision

    Science.gov (United States)

    Yang, Fuzeng; Yang, Liangliang; Yang, Qing; Kang, Likui

    The internal quality of apples cannot be detected by eye during sorting, which could allow apples of reduced quality to reach the market. This paper describes an instrument using X-rays and machine vision. The following steps were used to process the X-ray image in order to identify mould-core apples. Firstly, a lifting wavelet transform was used to obtain a low frequency image and three high frequency images. Secondly, we enhanced the low frequency image through histogram equalization. Then, the edge of each apple's image was detected using the Canny operator. Finally, a threshold was set to distinguish mould-core from normal apples according to the different lengths of the apple core's diameter. The experimental results show that this method can detect mould-core apples on-line with little time consumed, less than 0.03 seconds per apple, and the accuracy can reach 92%.
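    A hedged sketch of the described processing chain, using PyWavelets and OpenCV (wavelet approximation, histogram equalization, Canny edges, then a core-diameter threshold), is given below; the 'haar' wavelet, the heuristic core-contour choice and the pixel threshold are illustrative assumptions.

```python
# Hedged sketch of the described pipeline using PyWavelets and OpenCV; the
# 'haar' wavelet, the crude core-contour heuristic and the 45 px threshold are
# assumptions for illustration, not values from the record.
import cv2
import numpy as np
import pywt

def is_mould_core(xray_gray: np.ndarray, diameter_thresh_px: float = 45.0) -> bool:
    # Low-frequency approximation from a single-level 2-D wavelet transform.
    approx, _details = pywt.dwt2(xray_gray.astype(np.float32), "haar")
    approx = cv2.normalize(approx, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    enhanced = cv2.equalizeHist(approx)                # boost core contrast
    edges = cv2.Canny(enhanced, 50, 150)               # apple and core outlines
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    cnts = sorted(contours, key=cv2.contourArea, reverse=True)
    core = cnts[1] if len(cnts) > 1 else cnts[0]       # crude proxy: inner contour
    diameter = 2.0 * np.sqrt(cv2.contourArea(core) / np.pi)
    return diameter > diameter_thresh_px
```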

  10. Vision system for precision alignment of coolant channels

    International Nuclear Information System (INIS)

    Kar, S.; Rao, Y.V.; Valli Kumar; Joshi, D.G.; Chadda, V.K.; Nigam, R.K.; Kayal, J.N.; Panwar, S.; Sinha, R.K.

    1997-01-01

    This paper describes a vision system which has been developed for precision alignment of Coolant Channel Replacement Machine (CCRM) with respect to the front face of the coolant channel under repair/replacement. It has provisions for automatic as well as semi-automatic alignment. A special lighting scheme has been developed for providing illumination to the front face of the channel opening. This facilitates automatic segmentation of the digitized image. The segmented image is analysed to obtain the centre of the front face of the channel opening and thus the extent of misalignment i.e. offset of the camera with respect to the front face of the channel opening. The offset information is then communicated to the PLC to generate an output signal to drive the DC servo motors for precise positioning of the co-ordinate table. 2 refs., 5 figs
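
    As a rough illustration of the offset computation described above (an assumption, not the paper's implementation), the sketch below segments the brightly illuminated channel face, finds its centroid with image moments, and reports the pixel offset from the camera's optical centre; the function name and threshold are hypothetical.

        import cv2

        def channel_offset(gray_image, threshold=200):
            # Segment the brightly illuminated channel face.
            _, mask = cv2.threshold(gray_image, threshold, 255, cv2.THRESH_BINARY)
            m = cv2.moments(mask, binaryImage=True)
            if m['m00'] == 0:
                return None  # nothing segmented
            cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
            # Offset of the channel centre from the image (camera) centre, in pixels.
            h, w = gray_image.shape
            return cx - w / 2.0, cy - h / 2.0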

  11. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  12. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  13. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  14. Characteristics of the Arcing Plasma Formation Effect in Spark-Assisted Chemical Engraving of Glass, Based on Machine Vision.

    Science.gov (United States)

    Ho, Chao-Ching; Wu, Dung-Sheng

    2018-03-22

    Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. Processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During machining, the spring-fed tool electrode was pre-pressured on the quartz glass surface to keep the electrode in contact with the machined surface. In situ image acquisition and analysis of the SACE drilling process were used to analyze the captured images of the spark discharge at the tip and sidewall of the electrode. The results indicated an association between the cumulative size of the SACE-induced spark area and the depth of the hole: the estimated depths of the SACE-machined holes were proportional to the cumulative spark size, with a high degree of correlation. The study proposes an innovative computer vision-based method to estimate the depth and status of SACE-drilled holes in real time.

  15. Characteristics of the Arcing Plasma Formation Effect in Spark-Assisted Chemical Engraving of Glass, Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Chao-Ching Ho

    2018-03-01

    Full Text Available Spark-assisted chemical engraving (SACE) is a non-traditional machining technology that is used to machine electrically non-conducting materials including glass, ceramics, and quartz. Processing accuracy, machining efficiency, and reproducibility are the key factors in the SACE process. In the present study, a machine vision method is applied to monitor and estimate the status of a SACE-drilled hole in quartz glass. During machining, the spring-fed tool electrode was pre-pressured on the quartz glass surface to keep the electrode in contact with the machined surface. In situ image acquisition and analysis of the SACE drilling process were used to analyze the captured images of the spark discharge at the tip and sidewall of the electrode. The results indicated an association between the cumulative size of the SACE-induced spark area and the depth of the hole: the estimated depths of the SACE-machined holes were proportional to the cumulative spark size, with a high degree of correlation. The study proposes an innovative computer vision-based method to estimate the depth and status of SACE-drilled holes in real time.
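
    As a rough numerical illustration of the reported proportional relationship (not the authors' code), one could fit a linear model between cumulative spark area and measured hole depth; the calibration numbers below are invented purely for illustration.

        import numpy as np

        # Hypothetical calibration data: cumulative spark area (px) vs measured depth (um).
        spark_area = np.array([1200, 2500, 3900, 5100, 6600], dtype=float)
        hole_depth = np.array([55.0, 118.0, 180.0, 241.0, 310.0])

        # Least-squares line depth = a * area + b, as suggested by the reported correlation.
        a, b = np.polyfit(spark_area, hole_depth, 1)
        r = np.corrcoef(spark_area, hole_depth)[0, 1]
        print(f"depth ~ {a:.4f} * area + {b:.1f}  (r = {r:.3f})")

        # Estimate the depth for a new cumulative spark area observed during drilling.
        print("estimated depth:", a * 4500 + b)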

  16. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

    Full Text Available This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer, for machine vision and robot guidance industrial applications. The proposed actuator principally aims to overcome the limited spotlight focus angle of digital visual data capture transducers (digital cameras) and to enhance the ability of machine vision systems to perceive and move in 3D. The actuator design, structure, working principles and finite element analysis are discussed in this paper. A prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide 3D motion with multiple degrees of freedom, with a typical speed of 35 revolutions per minute, a resolution of less than 5 μm and a maximum load of 3.5 N. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers and the possible improvements that such technology could bring to machine vision and robot guidance industrial applications.

  17. Research on the proficient machine system. Theoretical part; Jukutatsu machine system no chosa kenkyu. Rironhen

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    The basic theory of the proficient machine system to be developed was studied. Important proficient techniques in manufacturing industries are becoming extinct because of insufficient succession to the next generation. The proficient machine system was proposed to cope with this situation. This machine system includes a mechanism for the progress and evolution of techniques and sensibilities, so as to adapt to environmental changes by learning and recognizing various motions such as work and process. Consequently, the basic research fields are composed of thought, learning, perception and action. This machine requires not only designed fixed functions but also the introduction of the same concept of proficiency as in human beings, so as to be adaptable to changes in situation, purpose, time and the machine's complexity. This report explains in detail the basic concept, system principle, approach and practical elemental technologies of the proficient machine system, and also describes the future prospects. 133 refs., 110 figs., 7 tabs.

  18. Unique sensor fusion system for coordinate-measuring machine tasks

    Science.gov (United States)

    Nashman, Marilyn; Yoshimi, Billibon; Hong, Tsai Hong; Rippey, William G.; Herman, Martin

    1997-09-01

    This paper describes a real-time hierarchical system that fuses data from vision and touch sensors to improve the performance of a coordinate measuring machine (CMM) used for dimensional inspection tasks. The system consists of sensory processing, world modeling, and task decomposition modules. It uses the strengths of each sensor -- the precision of the CMM scales and the analog touch probe and the global information provided by the low resolution camera -- to improve the speed and flexibility of the inspection task. In the experiment described, the vision module performs all computations in image coordinate space. The part's boundaries are extracted during an initialization process and then the probe's position is continuously updated as it scans and measures the part surface. The system fuses the estimated probe velocity and distance to the part boundary in image coordinates with the estimated velocity and probe position provided by the CMM controller. The fused information provides feedback to the monitor controller as it guides the touch probe to scan the part. We also discuss integrating information from the vision system and the probe to autonomously collect data for 2-D to 3-D calibration, and work to register computer aided design (CAD) models with images of parts in the workplace.
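
    A minimal sketch of the kind of fusion described, assuming a simple weighted (complementary) combination of the vision-based and CMM-based probe estimates once both are expressed in the same coordinate frame; the weights and function name are assumptions, not the paper's algorithm.

        import numpy as np

        def fuse_estimates(pos_vision, pos_cmm, w_vision=0.2, w_cmm=0.8):
            """Weighted fusion of two probe-position estimates (same frame, in mm).

            The CMM scales are far more precise, so they get the larger weight;
            the camera mainly contributes global context near part boundaries.
            """
            pos_vision = np.asarray(pos_vision, dtype=float)
            pos_cmm = np.asarray(pos_cmm, dtype=float)
            return w_vision * pos_vision + w_cmm * pos_cmm

        # Example: fused feedback sent to the controller guiding the touch probe.
        print(fuse_estimates([10.3, 5.1], [10.25, 5.05]))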

  19. Operating System For Numerically Controlled Milling Machine

    Science.gov (United States)

    Ray, R. B.

    1992-01-01

    OPMILL program is an operating system for a Kearney and Trecker milling machine, providing a fast, easy way to program the manufacture of machine parts with an IBM-compatible personal computer. It gives the machinist an "equation plotter" feature, which plots equations that define movements and converts the equations into a milling-machine-controlling program that moves the cutter along the defined path. The system includes tool-manager software that handles up to 25 tools and automatically adjusts to account for each tool. Developed on an IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
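
    The sketch below illustrates the general idea of an "equation plotter" (not OPMILL's actual output format): sample a curve y = f(x) and emit simple linear G-code moves. The feed rate, units and function names are assumptions.

        import numpy as np

        def equation_to_gcode(f, x_start, x_end, steps=50, feed=200.0):
            """Sample y = f(x) and emit G1 linear moves along the resulting path."""
            lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
            for x in np.linspace(x_start, x_end, steps):
                lines.append(f"G1 X{x:.3f} Y{f(x):.3f} F{feed:.0f}")
            return "\n".join(lines)

        # Example: a shallow sine-wave pass across the part.
        print(equation_to_gcode(lambda x: 2.0 * np.sin(0.2 * x), 0.0, 100.0))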

  20. 1st International Conference on Machine Learning for Cyber Physical Systems and Industry 4.0

    CERN Document Server

    Beyerer, Jürgen

    2016-01-01

    The work presents new approaches to Machine Learning for Cyber Physical Systems, experiences and visions. It contains some selected papers from the international Conference ML4CPS – Machine Learning for Cyber Physical Systems, which was held in Lemgo, October 1-2, 2015. Cyber Physical Systems are characterized by their ability to adapt and to learn: They analyze their environment and, based on observations, they learn patterns, correlations and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing and diagnosis. Machine Learning is the key technology for these developments.

  1. Hi-Vision telecine system using pickup tube

    Science.gov (United States)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  2. Smile (System/Machine-Independent Local Environment)

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, J.G.

    1988-04-01

    This document defines the characteristics of Smile, a System/machine-independent local environment. This environment consists primarily of a number of primitives (types, macros, procedure calls, and variables) that a program may use; these primitives provide facilities, such as memory allocation, timing, tasking and synchronization beyond those typically provided by a programming language. The intent is that a program will be portable from system to system and from machine to machine if it relies only on the portable aspects of its programming language and on the Smile primitives. For this to be so, Smile itself must be implemented on each system and machine, most likely using non-portable constructions; that is, while the environment provided by Smile is intended to be portable, the implementation of Smile is not necessarily so. In order to make the implementation of Smile as easy as possible and thereby expedite the porting of programs to a new system or a new machine, Smile has been defined to provide a minimal portable environment; that is, simple primitives are defined, out of which more complex facilities may be constructed using portable procedures. The implementation of Smile can be as any of the following: the underlying software environment for the operating system of an otherwise "bare" machine, a "guest" system environment built upon a preexisting operating system, an environment within a "user" process run by an operating system, or a single environment for an entire machine, encompassing both system and "user" processes. In the first three of these cases the tasks provided by Smile are "lightweight processes" multiplexed within preexisting processes or the system, while in the last case they also include the system processes themselves.

  3. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
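
    A compact sketch of the preprocessing-plus-classification pattern the abstract describes (dimensionality reduction with PCA followed by a neural-network classifier). It uses scikit-learn rather than the authors' GLPτS-trained networks, and the data arrays and function name are placeholders.

        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        # X: feature vectors extracted from cork-tile images, y: tile class labels.
        def train_tile_classifier(X, y):
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
            model = make_pipeline(
                PCA(n_components=0.95),                    # keep 95% of the variance
                MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
            )
            model.fit(X_tr, y_tr)
            return model, model.score(X_te, y_te)          # model and test accuracy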

  4. Collaborative Systems – Finite State Machines

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2011-01-01

    Full Text Available In this paper finite state machines are defined and formalized. Collaborative banking systems are presented and their correspondence with finite state machines is established. The role of finite state machines in complexity analysis is highlighted, and operations on very large virtual databases are modelled as finite state machines. The state diagram is built, and the transitions of commands and documents between the states of the collaborative system are presented. The paper analyzes data sets from the Collaborative Multicash Servicedesk application and performs a combined analysis in order to determine certain statistics. Indicators are obtained, such as the number of requests by category and the load degree of an agent in the collaborative system.
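
    A generic sketch of a finite state machine represented as a transition table; the states and events below are invented for illustration and are not the paper's banking model.

        # Minimal finite state machine as a transition table (illustrative states/events).
        TRANSITIONS = {
            ("submitted", "validate"): "validated",
            ("validated", "authorize"): "authorized",
            ("authorized", "execute"): "executed",
            ("validated", "reject"): "rejected",
        }

        def step(state, event):
            """Return the next state, or raise if the event is not allowed here."""
            try:
                return TRANSITIONS[(state, event)]
            except KeyError:
                raise ValueError(f"event '{event}' not permitted in state '{state}'")

        # Example: a document moving through the collaborative workflow.
        s = "submitted"
        for e in ("validate", "authorize", "execute"):
            s = step(s, e)
        print(s)  # -> executed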

  5. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.

  6. Building machine learning systems with Python

    CERN Document Server

    Richert, Willi

    2013-01-01

    This is a tutorial-driven and practical, but well-grounded book showcasing good Machine Learning practices. There will be an emphasis on using existing technologies instead of showing how to write your own implementations of algorithms. This book is a scenario-based, example-driven tutorial. By the end of the book you will have learnt critical aspects of Machine Learning Python projects and experienced the power of ML-based systems by actually working on them. This book primarily targets Python developers who want to learn about and build Machine Learning into their projects, or who want to pro

  7. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    Energy Technology Data Exchange (ETDEWEB)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B [Institute of Advanced Technology, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Marhaban, Mohammad Hamiruce [Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Mansor, Shattri B, E-mail: sahragard@yahoo.com [Department of Civil Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia)

    2011-02-15

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light. These limitations include the lack of consideration of ambient light, the effects of light reflected off the ground, and context-specific information. A previously developed color model has only been tested for a few geographical places in North America, and its applicability to other places in the world is open to question. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model with combined diffuse and specular reflection in normalized HSV color space can be used to predict color. In this paper, a new daylight color model describing the color of daylight for a broad range of sky conditions is developed, suited to the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces have been estimated using the developed color model and surface reflection function. The results are shown to be highly reliable.
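
    For orientation only, the sketch below converts an RGB pixel to the normalized HSV representation that such reflectance models typically work in; it does not reproduce the paper's combined diffuse and specular model, and the example pixel is arbitrary.

        import colorsys

        def to_normalized_hsv(r, g, b):
            """Map 8-bit RGB to HSV with all components normalized to [0, 1]."""
            return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

        # Example: a sky-like pixel.
        print(to_normalized_hsv(135, 170, 230))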

  8. Yield Estimation of Sugar Beet Based on Plant Canopy Using Machine Vision Methods

    Directory of Open Access Journals (Sweden)

    S Latifaltojar

    2014-09-01

    Full Text Available Crop yield estimation is one of the most important parameters for information and resource management in precision agriculture. This information is employed for optimizing the field inputs for subsequent cultivations. In the present study, the feasibility of sugar beet yield estimation by means of machine vision was studied. For the field experiments, strip images were taken during the growth season at one-month intervals. An image of the horizontal view of the plant canopy was prepared at the end of each month. At the end of the growth season, the beet roots were harvested and the correlation between the sugar beet canopy in each month of the growth period and the corresponding weight of the roots was investigated. Results showed a strong correlation between beet yield and the green surface area of autumn-cultivated sugar beets. The highest coefficient of determination was 0.85 at three months before harvest. In order to assess the accuracy of the final model, a second year of study was performed with the same methodology. The results showed a strong relationship between the actual and estimated beet weights, with R2 = 0.94. The model estimated beet yield with about 9 percent relative error. It is concluded that this method has appropriate potential for estimation of sugar beet yield based on band imaging prior to harvest.
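
    An illustrative sketch (assumed, not the authors' code) of the two ingredients the abstract describes: measuring the green canopy area in an image and regressing root weight against it. The HSV thresholds and function names are assumptions.

        import cv2
        import numpy as np

        def green_canopy_fraction(bgr_image):
            """Fraction of pixels falling in a rough 'green vegetation' HSV range."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))  # assumed green band
            return float(np.count_nonzero(mask)) / mask.size

        def fit_yield_model(canopy_fractions, root_weights):
            """Least-squares line: root weight as a linear function of canopy fraction."""
            a, b = np.polyfit(canopy_fractions, root_weights, 1)
            return lambda frac: a * frac + b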

  9. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    Science.gov (United States)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B.; Hamiruce Marhaban, Mohammad; Mansor, Shattri B.

    2011-02-01

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light. These limitations include the lack of consideration of ambient light, the effects of light reflected off the ground, and context-specific information. A previously developed color model has only been tested for a few geographical places in North America, and its applicability to other places in the world is open to question. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model with combined diffuse and specular reflection in normalized HSV color space can be used to predict color. In this paper, a new daylight color model describing the color of daylight for a broad range of sky conditions is developed, suited to the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces have been estimated using the developed color model and surface reflection function. The results are shown to be highly reliable.

  10. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    International Nuclear Information System (INIS)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B; Marhaban, Mohammad Hamiruce; Mansor, Shattri B

    2011-01-01

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light. These limitations include the lack of consideration of ambient light, the effects of light reflected off the ground, and context-specific information. A previously developed color model has only been tested for a few geographical places in North America, and its applicability to other places in the world is open to question. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model with combined diffuse and specular reflection in normalized HSV color space can be used to predict color. In this paper, a new daylight color model describing the color of daylight for a broad range of sky conditions is developed, suited to the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces have been estimated using the developed color model and surface reflection function. The results are shown to be highly reliable.

  11. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    Science.gov (United States)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed on pastures and rangeland that are usually huge in size and not easily accessible by vehicles. The current method for identifying livestock locations and counting animals is visual observation, which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary for counting, but aircraft are noisy and disturb the animals, which may introduce a source of error into the counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of small unmanned aerial vehicles (sUAV) and machine vision technology as a valuable alternative to manual animal surveying. A fixed-wing UAV fitted with GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were covered in four UAS flights, and the individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results, compared against ground truth, show the effectiveness of our algorithm.
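
    A simplified sketch of detection by spectral and spatial characteristics (assumed thresholds; not the authors' algorithm): threshold pixels that differ strongly from the surrounding vegetation, then keep connected components whose area is plausible for a single animal.

        import cv2
        import numpy as np

        def detect_animal_blobs(bgr_ortho, min_area=80, max_area=2000):
            """Return centroids of candidate animals in an orthomosaic tile."""
            hsv = cv2.cvtColor(bgr_ortho, cv2.COLOR_BGR2HSV)
            # Spectral cue (assumed): low-saturation, bright pixels stand out from vegetation.
            mask = cv2.inRange(hsv, (0, 0, 170), (180, 60, 255))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
            n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
            # Spatial cue: keep components whose area matches an animal-sized blob.
            return [tuple(centroids[i]) for i in range(1, n)
                    if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]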

  12. Multisource Data Fusion Framework for Land Use/Land Cover Classification Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Salman Qadri

    2017-01-01

    Full Text Available Data fusion is a powerful tool for merging multiple sources of information to produce a better output than any individual source. This study describes the data fusion of five land use/cover types, that is, bare land, fertile cultivated land, desert rangeland, green pasture, and Sutlej basin river land, derived from remote sensing. A novel framework for multispectral and texture-feature-based data fusion is designed to identify the land use/land cover types correctly. Multispectral data are obtained using a multispectral radiometer, while a digital camera is used for the image dataset. Each image contained 229 texture features; an optimized set of 30 texture features per image was obtained by combining three feature selection techniques, that is, Fisher, Probability of Error plus Average Correlation, and Mutual Information. This 30-texture-feature dataset is merged with the five-spectral-feature dataset to build the fused dataset. A comparison is performed among the texture, multispectral, and fused datasets using machine vision classifiers. The fused dataset outperformed both individual datasets. The overall accuracy acquired using a multilayer perceptron for texture data, multispectral data, and fused data was 96.67%, 97.60%, and 99.60%, respectively.
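
    A schematic of the feature-level fusion and classifier comparison described above, using scikit-learn's multilayer perceptron as a stand-in; the array names, shapes, network size and cross-validation setup are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def evaluate(features, labels):
            clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
            return cross_val_score(clf, features, labels, cv=5).mean()

        # texture: (n_samples, 30) selected texture features
        # spectral: (n_samples, 5) multispectral radiometer features
        def compare_datasets(texture, spectral, labels):
            fused = np.hstack([texture, spectral])   # simple feature-level fusion
            return {
                "texture": evaluate(texture, labels),
                "spectral": evaluate(spectral, labels),
                "fused": evaluate(fused, labels),
            }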

  13. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    Science.gov (United States)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    Apples are among the most highly consumed fruits in daily life. However, because apples bruise easily and surface damage strongly influences taste and export value, apple quality has to be assessed before the fruit reaches the consumer. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the position of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system acquires images of three samples from each camera and displays the image processing results in real time. An image processing algorithm was developed on the OpenCV and C++ platform. The software controls the hardware system to classify apples into two grades based on the presence or absence of surface bruises of 5 mm size. The experimental results are promising, and with further modification the system can be applicable to industrial production in the near future.

  14. Intensity measurement of automotive headlamps using a photometric vision system

    Science.gov (United States)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive head lamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  15. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation.

    Science.gov (United States)

    Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-08-04

    Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported by isolating a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning: a pretrained convolutional neural network (CNN), developed for machine vision purposes, is reused for the exercise classification effort. The new method simply requires researchers to generate plots of the signals they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to meet the infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals were used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the
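
    A minimal transfer-learning sketch in the spirit of the approach described (retraining an ImageNet-pretrained Inception network on plotted signal images). It uses tf.keras rather than the original TensorFlow retraining script, and the directory layout, image size, epoch count and other hyperparameters are assumptions.

        import tensorflow as tf

        IMG_SIZE = (299, 299)  # InceptionV3's native input size

        # Signal plots saved as images, one subfolder per exercise label (assumed layout).
        train_ds = tf.keras.utils.image_dataset_from_directory(
            "plots/train", image_size=IMG_SIZE, batch_size=32)

        base = tf.keras.applications.InceptionV3(
            weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
        base.trainable = False  # keep the pretrained vision features frozen

        model = tf.keras.Sequential([
            tf.keras.layers.Rescaling(1.0 / 255),
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(5, activation="softmax"),  # 5 exercises in the study
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(train_ds, epochs=5)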

  16. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  17. High slot utilization systems for electric machines

    Science.gov (United States)

    Hsu, John S

    2009-06-23

    Two new High Slot Utilization (HSU) Systems for electric machines enable the use of form wound coils that have the highest fill factor and the best use of magnetic materials. The epoxy/resin/curing treatment ensures the mechanical strength of the assembly of teeth, core, and coils. In addition, the first HSU system allows the coil layers to be moved inside the slots for the assembly purpose. The second system uses the slided-in teeth instead of the plugged-in teeth. The power density of the electric machine that uses either system can reach its highest limit.

  18. Superconducting Coil Winding Machine Control System

    Energy Technology Data Exchange (ETDEWEB)

    Nogiec, J. M. [Fermilab; Kotelnikov, S. [Fermilab; Makulski, A. [Fermilab; Walbridge, D. [Fermilab; Trombly-Freytag, K. [Fermilab

    2016-10-05

    The Spirex coil winding machine is used at Fermilab to build coils for superconducting magnets. Recently this machine was equipped with a new control system, which allows operation from both a computer and a portable remote control unit. This control system is distributed between three layers, implemented on a PC, real-time target, and FPGA, providing respectively HMI, operational logic and direct controls. The system controls motion of all mechanical components and regulates the cable tension. Safety is ensured by a failsafe, redundant system.

  19. Future Smart Cooking Machine System Design

    Directory of Open Access Journals (Sweden)

    Dewi Agushinta R.

    2013-11-01

    Full Text Available There are many tools that make human tasks easier. Cooking has become a basic necessity for human beings, since food is one of the basic human needs. Until now, however, cooking equipment has largely consisted of hand tools, while most people lead very busy lives. Cooking tools that can do the cooking work by themselves are therefore becoming necessary. The Future Smart Cooking Machine is an artificial intelligence machine that can do cooking work automatically. With this system design, cooking time is minimized and ease of work is expected to be achieved. The development of this system is carried out with System Development Life Cycle (SDLC) methods. The prototyping method used in this system is a throw-away prototyping approach. At the end of this research, a cooking machine system design is produced, including the physical design of the machine and the interface design.

  20. Automatic optical detection and classification of marine animals around MHK converters using machine vision

    Energy Technology Data Exchange (ETDEWEB)

    Brunton, Steven [Univ. of Washington, Seattle, WA (United States)

    2018-01-15

    Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes: algae | invertebrate | vertebrate, one species | multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
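
    A toy version of the event-detection and classification stages (Fourier features feeding an SVM); it substitutes OpenCV's MOG2 background subtractor for the RPCA step and assumes pre-cropped event image patches, so it only sketches the pipeline's shape rather than the authors' framework.

        import cv2
        import numpy as np
        from sklearn.svm import SVC

        bg = cv2.createBackgroundSubtractorMOG2(history=200)

        def is_event_frame(gray_frame, min_foreground=500):
            """Flag frames whose foreground mask is large enough to be an 'event'."""
            return np.count_nonzero(bg.apply(gray_frame)) > min_foreground

        def fourier_features(patch, size=(64, 64), keep=32):
            """Low-frequency Fourier magnitudes of a grayscale event patch."""
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(cv2.resize(patch, size))))
            c = size[0] // 2
            return spectrum[c - keep // 2:c + keep // 2, c - keep // 2:c + keep // 2].ravel()

        def train_event_classifier(patches, labels):
            X = np.array([fourier_features(p) for p in patches])
            return SVC(kernel="rbf").fit(X, labels)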

  1. A Fast Vision System for Soccer Robot

    Directory of Open Access Journals (Sweden)

    Tianwu Yang

    2012-01-01

    Full Text Available This paper proposes fast colour-based object recognition and localization for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. Object recognition is based only on the edge pixels, to speed up the computation. The edge pixels are detected by intelligently scanning a small fraction of the image pixels, distributed over the whole image. A fast method for line and circle centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points can be seen from the robot camera view, the three rotation angles are adjusted to achieve precise localization of robots and other objects. If no key point is detected, the robot position is estimated according to the history of robot movement and the feedback from the motors and sensors. Experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate robots and other objects.

  2. INVIS : Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color nightvision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  3. Vision system for dial gage torque wrench calibration

    Science.gov (United States)

    Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.

    1993-11-01

    In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurements. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications like vision systems to read and calibrate analog instruments.
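
    A bare-bones sketch of the pointer-angle measurement idea (assumed preprocessing; not the AKO/Ford system): find the strongest line segment in the dial image with a Hough transform and report its angle, so the applied torque is proportional to the angle difference between two images.

        import cv2
        import numpy as np

        def pointer_angle(gray_dial):
            """Angle (degrees) of the most prominent line segment, assumed to be the pointer."""
            edges = cv2.Canny(gray_dial, 60, 180)
            segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                       minLineLength=40, maxLineGap=5)
            if segments is None:
                return None
            x1, y1, x2, y2 = max(segments[:, 0, :],
                                 key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
            return np.degrees(np.arctan2(y2 - y1, x2 - x1))

        def angle_change(image_before, image_after):
            """Angular difference between two pointer positions (proportional to torque)."""
            return pointer_angle(image_after) - pointer_angle(image_before)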

  4. Machine learning paradigms applications in recommender systems

    CERN Document Server

    Lampropoulos, Aristomenis S

    2015-01-01

    This timely book presents Applications in Recommender Systems which are making recommendations using machine learning algorithms trained via examples of content the user likes or dislikes. Recommender systems built on the assumption of availability of both positive and negative examples do not perform well when negative examples are rare. It is exactly this problem that the authors address in the monograph at hand. Specifically, the book's approach is based on one-class classification methodologies that have been appearing in recent machine learning research. The blending of recommender systems and one-class classification provides a new, very fertile field for research, innovation and development with potential applications in “big data” as well as “sparse data” problems. The book will be useful to researchers, practitioners and graduate students dealing with problems of extensive and complex data. It is intended for both the expert/researcher in the fields of Pattern Recognition, Machine Learning and ...

  5. Hydraulic Modular Dosaging Systems for Machine Drives

    Directory of Open Access Journals (Sweden)

    A. J. Kotlobai

    2005-01-01

    Full Text Available The justified principle of building modular dosaging systems for positive-displacement multimotor hydraulic drives, used in the running gear and technological equipment of mobile construction, road and agricultural machines, makes it possible to synchronize the motion of running parts. Examples of the realization of modular dosaging systems and an algorithm of their operation are given in the paper.

  6. Visions of sustainable urban energy systems. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Pietzsch, Ursula [HFT Stuttgart (Germany). zafh.net - Centre of Applied Research - Sustainable Energy Technology; Mikosch, Milena [Steinbeis-Zentrum, Stuttgart (Germany). Europaeischer Technologietransfer; Liesner, Lisa (eds.)

    2010-09-15

    Within the polycity final conference from 15th to 17th September, 2010, in Stuttgart (Federal Republic of Germany) the following lectures were held: (1) Visions of sustainable urban energy system (Ursula Eicker); (2) Words of welcome (Tanja Goenner); (3) Zero-energy Europe - We are on our way (Jean-Marie Bemtgen); (4) Polycity - Energy networks in sustainable cities - An introduction (Ursula Pietzsch); (5) Energy efficient city - Successful examples in the European concerto initiative (Brigitte Bach); (6) Sustainable building and urban concepts in the Catalonian polycity project contributions to the polycity final conference 2010 (Nuria Pedrals); (7) Energy efficient buildings and renewable supply within the German polycity project (Ursula Eicker); (8) Energy efficient buildings and cities in the US (Thomas Spiegehalter); (9) Energy efficient communities - First results from an IEA collaboration project (Reinhard Jank); (10) The European energy performance of buildings directive (EPBD) - Lessons learned (Eduardo Maldonado); (11) Passive house standard in Europe - State-of-the-art and challenges (Wolfgang Feist); (12) High efficiency non-residential buildings: Concepts, implementations and experiences from the UK (Levin Lomas); (13) This is how we can save our world (Franz Alt); (14) Green buildings and renewable heating and cooling concepts in China (Yanjun Dai); (15) Sustainable urban energy solutions for Asia (Brahmanand Mohanty); (16) Description of "Parc de l'Alba" polygeneration system: A large-scale trigeneration system with district heating within the Spanish polycity project (Francesc Figueras Bellot); (17) Improved building automation and control systems with hardware-in-the-loop solutions (Martin Becker); (18) The Italian polycity project area: Arquata (Luigi Fazari); (19) Photovoltaic system integration: In rehabilitated urban structures: Experiences and performance results from the Italian polycity project in Turin (Franco

  7. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object's color and then perform color segmentation. To avoid wrong action judgments, the system uses a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best judgment from the weighted votes. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method gives good accuracy and stability when operating the human-machine interface of the sports training system.
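
    A minimal illustration of a weight voting mechanism of the kind described (the detectors, scores and weights below are invented): each cue votes for an action label with a confidence score, votes are scaled by per-detector weights, and the highest weighted total wins.

        from collections import defaultdict

        def weighted_vote(votes, weights):
            """votes: list of (detector_name, action_label, score); weights: per-detector trust."""
            totals = defaultdict(float)
            for detector, action, score in votes:
                totals[action] += weights.get(detector, 1.0) * score
            return max(totals, key=totals.get)

        # Example: three cues disagree; the weighted total decides the action judgment.
        votes = [("silhouette", "raise_left_arm", 0.7),
                 ("color_blob", "raise_right_arm", 0.6),
                 ("optical_flow", "raise_left_arm", 0.4)]
        weights = {"silhouette": 1.0, "color_blob": 0.5, "optical_flow": 0.8}
        print(weighted_vote(votes, weights))  # -> raise_left_arm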

  8. VVER NPPs fuel handling machine control system

    International Nuclear Information System (INIS)

    Mini, G.; Rossi, G.; Barabino, M.; Casalini, M.

    2002-01-01

    In order to increase the safety level of the fuel handling machines at WWER NPPs, Ansaldo Nucleare was asked to design and supply a new control system. Two Fuel Handling Machine (FHM) control system units have already been supplied for Temelin NPP, and further supplies are in progress for the Atommash company, which is in charge of supplying FHMs for NPPs located in Russia, Ukraine and China. The computer-based system takes into account all the operational safety interlocks, so that it is able to prevent incorrect and dangerous manoeuvres in the case of operator error. The control system design criteria, hardware and software architecture, and quality assurance control are in accordance with the most recent international requirements and standards, in particular for electromagnetic disturbance immunity and seismic compatibility. The hardware architecture of the control system is based on the ABB INFI 90 system. The microprocessor-based ABB INFI 90 system incorporates and improves upon many of the time-proven control capabilities of the Bailey Network 90, validated over 14,000 installations world-wide. The control system retains all of the machine's formerly designed sensors and devices, notably the Russian-designed angular position measurement sensors named 'selsyn'. Nevertheless, it is fully compatible with the most recent sensors and devices currently available on the market (for example, multiturn absolute encoders). All control logic was developed using the standard INFI 90 Engineering Work Station, interconnecting blocks extracted from an extensive SAMA library using a graphical approach (CAD), allowing easier intelligibility, more flexibility, and updated and coherent documentation. The data acquisition system and the Man Machine Interface are implemented by ABB in co-operation with Ansaldo. The flexible and powerful software structure of the 1090 workstations (APMS - Advanced Plant Monitoring System, or Tenore NT) has been successfully used to interface the

  9. ARM-based visual processing system for prosthetic vision.

    Science.gov (United States)

    Matteucci, Paul B; Byrnes-Preston, Philip; Chen, Spencer C; Lovell, Nigel H; Suaning, Gregg J

    2011-01-01

    A growing number of prosthetic devices have been shown to provide visual perception to the profoundly blind through electrical neural stimulation. These first-generation devices offer promising outcomes to those affected by degenerative disorders such as retinitis pigmentosa. Although prosthetic approaches vary in their placement of the stimulating array (visual cortex, optic nerve, epi-retinal surface, sub-retinal surface, supra-choroidal space, etc.), most of the solutions incorporate an externally worn device to acquire and process video and to provide the implant with instructions on how to deliver electrical stimulation to the patient, in order to elicit phosphenized vision. With the significant increase in availability and performance of low power-consumption smart phone and personal device processors, the authors investigated the use of a commercially available ARM (Advanced RISC Machine) device as an externally worn processing unit for a prosthetic neural stimulator for the retina. A 400 MHz Samsung S3C2440A ARM920T single-board computer was programmed to extract 98 values from a 1.3 Megapixel OV9650 CMOS camera using impulse, regional averaging and Gaussian sampling algorithms. Power consumption and speed of video processing were compared with results reported for similar devices. The results show that, by using code optimization, the system is capable of driving a 98 channel implantable device for the restoration of visual percepts to the blind.
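
    A rough sketch of the regional-averaging idea (one of the three sampling schemes mentioned): average the camera frame over a coarse grid and keep 98 values as stimulation levels. The 7x14 grid shape and the scaling are assumptions, not the device's actual mapping.

        import numpy as np

        def regional_average_samples(gray_frame, grid=(7, 14), n_channels=98):
            """Average pixel intensity over a 7x14 grid -> 98 electrode drive values."""
            h, w = gray_frame.shape
            rows, cols = grid
            values = []
            for r in range(rows):
                for c in range(cols):
                    block = gray_frame[r * h // rows:(r + 1) * h // rows,
                                       c * w // cols:(c + 1) * w // cols]
                    values.append(block.mean())
            return np.array(values[:n_channels]) / 255.0  # normalized stimulation levels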

  10. Airborne Use of Night Vision Systems

    Science.gov (United States)

    Mepham, S.

    1990-04-01

    Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force. These are seen to be: - Operations in the NATO Central Region - To have a night as well as a day capability - To carry out low level, high speed penetration - To attack battlefield targets, especially groups of tanks - To meet these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced with a first pass attack. It is therefore most important that not only must the pilot be able to fly at low level to the target but also he must be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high speed low level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst it is true that this is for winter conditions and in summer the situation is better, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is to use Terrain Following Radar (TFR). This system provides a complete 24 hour capability. However, it has two main disadvantages. First, it is an active system, which means it can be jammed or homed in on, and it is only useful in attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to other than a small number of aircraft.

  11. Vision system for diagnostic task | Merad | Global Journal of Pure ...

    African Journals Online (AJOL)

    Due to degraded environmental conditions, direct measurements are not possible. ... Degraded conditions: vibrations, water and metal chip projections, ... Before tooling, the vision system has to answer: “is it the right piece at the right place?

  12. Design of man-machine-communication-systems

    International Nuclear Information System (INIS)

    Zimmermann, R.

    1975-04-01

    This paper presents some fundamentals of man-machine communication and derives requirements and recommendations for the design of communication systems. The main points are directives for the design of optical display systems, with details on visual perception and resolution, luminance and contrast, as well as the discernibility and coding of displayed information. The most important rules and recommendations for acoustic information systems, control devices and console design are also given. (orig.) [de]

  13. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    Science.gov (United States)

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. ... Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  14. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

  15. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  16. Recent developments in man-machine systems

    International Nuclear Information System (INIS)

    Johannsen, G.

    1987-01-01

    The field of man-machine systems is introduced with its subareas and a short outline of its 45-year history. Three current lines of development in university and industrial research are emphasized. Today, human problem-solving activities are experimentally investigated and analytically described more vigorously than control activities. Further, improved information presentations and decision support are made possible through new technologies of computer graphics and expert systems. Finally, work on a general design methodology for man-machine systems is in progress. The aim is to better support human operators of dynamic technological systems as well as designers of graphics for visual display units and of dialogue styles. Thereby, safety and availability of the complete system can be increased. (orig.) [de

  17. EAST machine assembly and its measurement system

    International Nuclear Information System (INIS)

    Wu, S.T.

    2005-01-01

    The EAST (HT-7U) superconducting tokamak consists of a superconducting poloidal field magnet system, a toroidal field magnet system, a vacuum vessel and in-vessel components, thermal shields and a cryostat vessel. The main parts of the machine have been delivered to ASIPP (Institute of Plasma Physics, Chinese Academy of Sciences) successively since 2003. Because of its complicated constitution and precise requirements, a reasonable assembly procedure and measurement technique should be defined carefully. Before the assembly procedure, a reference frame has been set up with reference fiducial targets on the wall of the test hall by an industrial measurement system. After the torus of TF coils is formed, a new reference frame will be set up from the position of the TF torus. The vacuum vessel with all inner parts will be installed with reference to the new reference frame. The large size and mass of the components and the special configuration of the superconducting machine, with the tight installation tolerances of the HT-7U (EAST) machine, result in a complicated assembly procedure. The procedure began with the installation of the support frame and the base of the cryostat vessel last year. In this paper, the assembly precision requirements for some key components of the machine are described. The reference frame for the assembly and maintenance is explained. The assembly procedure is introduced

  18. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non task-specific grasps of unknown ...... and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents....... presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions...... of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour based grasping methods, the definition and evaluation of surface based grasping methods, the definition of a benchmark for testing...

  19. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  20. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash

    Directory of Open Access Journals (Sweden)

    Mathew G. Pelletier

    2008-02-01

    Full Text Available One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPU) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit (GPU), for processing of the cotton trash images, a speed-up of over 6.5 times over optimized code running on the PC's central processing unit (CPU) was obtained.
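
    The per-pixel nature of the trash-detection step is what makes it map well onto a GPU's many parallel threads. The fragment below is only a CPU stand-in written with vectorized NumPy to show that data-parallel structure, using an assumed synthetic frame and an assumed grey-level threshold; the published work implements the equivalent kernel on the graphics hardware.

```python
import numpy as np

# Illustrative stand-in (not the authors' GPU code): trash pixels on white
# cotton lint are darker, so a per-pixel threshold classifies them.  The
# operation is embarrassingly parallel, which is what makes it a good fit
# for a GPU kernel; here NumPy's vectorization plays that role on the CPU.
rng = np.random.default_rng(0)
frame = rng.integers(200, 256, size=(1024, 1024), dtype=np.uint8)  # fake lint image
frame[100:120, 200:260] = 40                                       # fake trash particle

TRASH_THRESHOLD = 128                            # assumed grey-level cut-off
trash_mask = frame < TRASH_THRESHOLD             # one independent comparison per pixel
trash_fraction = trash_mask.mean() * 100.0       # percent of frame classified as trash

print(f"trash contamination estimate: {trash_fraction:.3f} %")
```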

  1. A SYSTEMIC VISION OF BIOLOGY: OVERCOMING LINEARITY

    Directory of Open Access Journals (Sweden)

    M. Mayer

    2005-07-01

    Full Text Available Many authors have proposed that contextualization of reality is necessary to teach Biology, emphasizing students' social and economic realities. However, contextualization means more than this; it is related to working with different kinds of phenomena and/or objects which enable the expression of scientific concepts. Thus, contextualization allows the integration of different contents. Under this perspective, the objectives of this work were to articulate different biology concepts in order to develop a systemic vision of biology; to establish relationships with other areas of knowledge and to make concrete the cell molecular structure and organization as well as their implications for the living beings' environment, using contextualization. The methodology adopted in this work was based on three aspects: interdisciplinarity, contextualization and development of competences, using energy, its flux and transformations, as a thematic axis and an approach which allowed the interconnection between different situations involving these concepts. The activities developed were: 1. a dialectic exercise, involving a movement around micro and macroscopic aspects, by using questions and activities, supported by the use of alternative material (such as springs and candles) on energy, its forms, transformations and implications in the biological domain (microscopic concepts); 2. construction of molecular models, approaching the concepts of atom, chemical bonds and bond energy in molecules; 3. observations developed in the "Manguezal" (mangrove swamp) ecosystem (Itapissuma, PE), used to work on macroscopic concepts (such as diversity and classification of plants and animals), concerning energy flow through food chains and webs. A photograph register of all activities along the course plus texts

  2. WWER NPPs fuel handling machine control system

    International Nuclear Information System (INIS)

    Mini, G.; Rossi, G.; Barabino, M.; Casalini, M.

    2001-01-01

    In order to increase the safety level of the fuel handling machine on WWER NPPs, Ansaldo Nucleare was asked to design and supply a new Control System. Two FHM Control System units have already been supplied for Temelin NPP and other supplies are in process for the Atommash company, which is in charge of the supply of FHMs for NPPs located in Russia, Ukraine and China. The Fuel Handling Machine (FHM) Control System is an integrated system capable of complete management of nuclear fuel assemblies. The computer-based system takes into account all the operational safety interlocks so that it is able to avoid incorrect and dangerous manoeuvres in the case of operator error. Control system design criteria, hardware and software architecture, and quality assurance control are in accordance with the most recent international requirements and standards, in particular for electromagnetic disturbance immunity demands and seismic compatibility. The hardware architecture of the control system is based on the ABB INFI 90 system. The microprocessor-based ABB INFI 90 system incorporates and improves upon many of the time-proven control capabilities of Bailey Network 90, validated in over 14,000 installations worldwide. The control system is compatible with all the formerly designed sensors and devices of the machine, notably the angular position measurement sensors of Russian design named 'selsyn'. Nevertheless it is fully compatible with all the most recent sensors and devices currently available on the market (e.g. multiturn absolute encoders). All control logic components were developed using the standard INFI 90 Engineering Work Station, interconnecting blocks extracted from an extensive SAMA library by using a graphical approach (CAD), allowing easier intelligibility, more flexibility, and updated and coherent documentation. The data acquisition system and the Man Machine Interface are implemented by ABB in co-operation with Ansaldo. The flexible and powerful software structure

  3. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    Science.gov (United States)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
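
    A minimal sketch of the sensing-and-control loop described above is given below. It is not the authors' implementation: the Otsu threshold, the pool-width set-point, the proportional gain and the synthetic test frame are all assumed, and a plain proportional correction stands in for the neural network speed model.

```python
import cv2
import numpy as np

# Minimal sketch (assumed values, not the published code): estimate the
# molten-pool width from a backside image and nudge the welding speed to
# hold the width at a set-point.
def pool_width_px(gray):
    """Return the width in pixels of the largest bright blob (the molten pool)."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    largest = max(contours, key=cv2.contourArea)
    _, _, w, _ = cv2.boundingRect(largest)
    return w

TARGET_WIDTH = 60        # pixels (assumed set-point)
KP = 0.02                # mm/s per pixel of error (assumed gain)

def next_speed(current_speed, gray_frame):
    """Proportional correction: a wider pool means more heat input, so speed up."""
    error = pool_width_px(gray_frame) - TARGET_WIDTH
    return current_speed + KP * error

# Synthetic frame: a bright ellipse standing in for the molten pool
frame = np.zeros((240, 320), np.uint8)
cv2.ellipse(frame, (160, 120), (35, 20), 0, 0, 360, 255, -1)
print("new torch speed [mm/s]:", next_speed(2.0, frame))
```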

  4. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  5. Viscoelastic machine elements elastomers and lubricants in machine systems

    CERN Document Server

    MOORE, D F

    2015-01-01

    Viscoelastic Machine Elements, which encompass elastomeric elements (rubber-like components), fluidic elements (lubricating squeeze films) and their combinations, are used for absorbing vibration, reducing friction and improving energy use. Examples include pneumatic tyres, oil and lip seals, compliant bearings and races, and thin films. This book sets out to show that these elements can be incorporated in machine analysis, just as in the case of conventional elements (e.g. gears, cogs, chain drives, bearings). This is achieved by introducing elementary theory and models, by describing new an

  6. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available The visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of the current audience measurement system are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. The feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze viewers' attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.
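
    The detection front end of such a peoplemeter can be approximated with off-the-shelf tools. The sketch below (assumed set-up, not the published prototype) counts frontal-face detections from a TV-mounted camera as a crude proxy for attentive viewers; a real system would add tracking and finer attentional-state recognition.

```python
import cv2

# Illustrative sketch (assumed set-up, not the published system): count the
# faces looking toward the TV-mounted camera as a proxy for attentive viewers.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def attentive_viewers(frame_bgr):
    """Frontal-face detections as a crude 'attending to the screen' count."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                      # robustness to room lighting
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                           minSize=(60, 60))
    return len(faces)

cap = cv2.VideoCapture(0)          # camera mounted on the television (assumed index 0)
ok, frame = cap.read()
if ok:
    print("attentive viewers this frame:", attentive_viewers(frame))
cap.release()
```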

  7. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    ... of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems. In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  8. Theory and practice in machining systems

    CERN Document Server

    Ito, Yoshimi

    2017-01-01

    This book describes machining technology from a wider perspective by considering it within the machining space. Machining technology is one of the metal removal activities that occur at the machining point within the machining space. The machining space consists of structural configuration entities, e.g., the main spindle, the turret head and attachments such as the chuck and mandrel, and also the form-generating movement of the machine tool itself. The book describes fundamental topics, including the form-generating movement of the machine tool and the important roles of the attachments, before moving on to consider the supply of raw materials into the machining space, and the discharge of swarf from it, and then machining technology itself. Building on the latest research findings, “Theory and Practice in Machining System” discusses current challenges in machining. Thus, with the inclusion of introductory and advanced topics, the book can be used as a guide and survey of machining technology for students an...

  9. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensor systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high speed tracking device, communicating to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles.
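
    The guidance step, turning the tracker's blob coordinates into a steering command, can be illustrated with a simple proportional controller. The following sketch uses assumed image width, gain and blob coordinates; it is not the Bearcat code.

```python
# Illustrative sketch (assumed geometry, not the Bearcat implementation): the
# tracker reports X,Y image coordinates of blobs on the left and right lane
# markers; steering is proportional to the lateral offset of the lane centre
# from the image centre.
IMAGE_WIDTH = 640          # pixels (assumed camera resolution)
KP_STEER = 0.15            # degrees of steering per pixel of offset (assumed)

def steering_command(left_blobs, right_blobs):
    """left_blobs/right_blobs: lists of (x, y) pixel coordinates from the tracker."""
    if not left_blobs or not right_blobs:
        return 0.0                                   # hold course if a marker is lost
    left_x = sum(x for x, _ in left_blobs) / len(left_blobs)
    right_x = sum(x for x, _ in right_blobs) / len(right_blobs)
    lane_centre = (left_x + right_x) / 2.0
    offset = lane_centre - IMAGE_WIDTH / 2.0         # positive: lane centre is to the right
    return KP_STEER * offset                         # steer toward the lane centre

print(steering_command([(180, 400), (190, 350)], [(470, 400), (460, 350)]))
```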

  10. The APS machine protection system (MPS)

    International Nuclear Information System (INIS)

    Fuja, R.; Berg, B.; Arnold, N.

    1996-01-01

    The machine protection system (MPS) that protects the APS storage ring vacuum chamber from x-ray beams is active. There are over 650 sensors monitored and networked through the MPS system. About the same number of other process variables are monitored by the much slower EPICS control system, which also has an input to the rf abort chain. The MPS network is still growing with the beam position limits detection system coming on-line. The network configuration, along with a limited description of individual subsystems, is presented

  11. The APS machine protection system (MPS)

    Energy Technology Data Exchange (ETDEWEB)

    Fuja, R.; Berg, B.; Arnold, N. [and others

    1996-08-01

    The machine protection system (MPS) that protects the APS storage ring vacuum chamber from x-ray beams is active. There are over 650 sensors monitored and networked through the MPS system. About the same number of other process variables are monitored by the much slower EPICS control system, which also has an input to the rf abort chain. The MPS network is still growing with the beam position limits detection system coming on-line. The network configuration, along with a limited description of individual subsystems, is presented.

  12. Positional reference system for ultraprecision machining

    Science.gov (United States)

    Arnold, J.B.; Burleson, R.R.; Pardue, R.M.

    1980-09-12

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerically controlled multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  13. Positional reference system for ultraprecision machining

    International Nuclear Information System (INIS)

    Arnold, J.B.; Burleson, R.R.; Pardue, R.M.

    1982-01-01

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerically controlled multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base

  14. FPGA-based multisensor real-time machine vision for banknote printing

    Science.gov (United States)

    Li, Rui; Türke, Thomas; Schaede, Johannes; Willeke, Harald; Lohweg, Volker

    2009-02-01

    Automatic sheet inspection in banknote production has been used as a standard quality control tool for more than a decade. As more and more print techniques and new security features are established, total quality in bank note printing must be guaranteed. This aspect has a direct impact on the research and development for bank note inspection systems in general in the sense of technological sustainability. It is accepted that print defects are generated not only by printing parameter changes, but also by mechanical machine parameter changes, which will change unnoticed in production. Therefore, a new concept for a multi-sensory adaptive learning and classification model based on Fuzzy-Pattern-Classifiers for data inspection and machine conditioning is proposed. A general aim is to improve the known inspection techniques and propose an inspection methodology that can ensure a comprehensive quality control of the printed substrates processed by printing presses, especially printing presses which are designed to process substrates used in the course of the production of banknotes, security documents and others. Therefore, the research and development work in this area necessitates a change in concept for banknote inspection in general. In this paper a new generation of FPGA (Field Programmable Gate Array) based real time inspection technology is presented, which allows not only colour inspection on banknote sheets, but also has the implementation flexibility for various inspection algorithms for security features, such as window threads, embedded threads, OVDs, watermarks, screen printing etc., and multi-sensory data processing. A variety of algorithms is described in the paper, which are designed for and implemented on FPGAs. The focus is on algorithmic approaches.

  15. Progress in computer vision.

    Science.gov (United States)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  16. Advanced Electrical Machines and Machine-Based Systems for Electric and Hybrid Vehicles

    OpenAIRE

    Ming Cheng; Le Sun; Giuseppe Buja; Lihua Song

    2015-01-01

    The paper presents a number of advanced solutions on electric machines and machine-based systems for the powertrain of electric vehicles (EVs). Two types of systems are considered, namely the drive systems designated to the EV propulsion and the power split devices utilized in the popular series-parallel hybrid electric vehicle architecture. After reviewing the main requirements for the electric drive systems, the paper illustrates advanced electric machine topologies, including a stator perm...

  17. System design for the new TMX machine

    International Nuclear Information System (INIS)

    Chargin, A.K.; Calderon, M.O.; Mooney, L.J.; Vogtlin, G.E.

    1977-01-01

    The Tandem Mirror Experiment (TMX) is designed to test the physics of a new approach to Q-enhancement in open confinement systems. In the tandem mirror concept, the ends of a long solenoid are plugged electrostatically by means of ambipolar potential barriers created in two mirror machines or plugs, one at each end of the solenoid. The ambipolar potential in mirror machines develops as a consequence of the higher scattering rate of electrons and the balancing of electron and ion loss rates. The TMX experiment incorporates very few new engineering developments, but it does involve a new way of combining in an integrated system many previously developed ideas. The engineering task is to design the machine that would provide a proof-of-principle evaluation of the tandem mirror concept as rapidly as possible. The preliminary design was started in September 1976 and was completed by December 1976. It led to a cost estimate of $11 million and a scheduled construction period of 18 months

  18. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  19. Reconfigurable vision system for real-time applications

    Science.gov (United States)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems on chip designs, and makes easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted for computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task specific vision applications with enough processing power, using the minimum amount of hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  20. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  1. Operator aid system for Dhruva fueling machine

    International Nuclear Information System (INIS)

    Misra, S.M.; Ramaswamy, L.R.; Gohel, N.; Bharadwaj, G.; Ranade, M.R.; Khadilkar, M.G.

    1997-01-01

    Systems with significant software content are replacing old hardware logic systems. These systems are not only versatile but also allow easy changes to the program. Extensive use of such systems in critical real-time operating environments warrants not only extensive training on simulators and documentation, but also a fault-tolerant system to bring the operation to a safe state in case of error. With new graphical user interfaces and advances in personal computer hardware design, the dynamic status of the physical environment can be shown on the visual display in near real time. These visual aids, along with software covering all the interlocks, aid an operator in his professional work. This paper highlights the operator aid system for the Dhruva fueling machine. (author). 6 refs., 1 fig

  2. Computer vision and machine learning for robust phenotyping in genome-wide studies.

    Science.gov (United States)

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R V Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K

    2017-03-08

    Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for the genetic studies of the abiotic stress iron deficiency chlorosis (IDC) of soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions was evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain expert informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for the genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline by identifying a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems.
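
    The shape of such a pipeline, canopy segmentation, hand-crafted feature extraction, and a supervised classifier trained against expert scores, can be sketched compactly. The example below uses synthetic images, an excess-green canopy mask, mean-colour features and a random forest purely for illustration; it is not the published workflow or its feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative pipeline sketch with synthetic data (not the published workflow):
# segment the canopy with an excess-green index, summarize it with simple
# colour statistics, and train a classifier against expert IDC scores.
def canopy_features(rgb):
    """rgb: HxWx3 float array in [0, 1] -> mean canopy colour as features."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    canopy = (2 * g - r - b) > 0.05          # crude canopy/background split
    if not canopy.any():
        return np.zeros(3)
    return np.array([r[canopy].mean(), g[canopy].mean(), b[canopy].mean()])

# Stand-in training data: 40 plots with expert IDC scores 1 (green) .. 5 (severe)
rng = np.random.default_rng(1)
X = np.vstack([canopy_features(rng.random((64, 64, 3))) for _ in range(40)])
y = rng.integers(1, 6, size=40)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("predicted IDC score for the first plot:", model.predict(X[:1])[0])
```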

  3. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real-time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware yet avoid bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research.

  4. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  5. The autonomous vision system on TeamSat

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Riis, Troels

    1999-01-01

    The second qualification flight of Ariane 5 blasted off from the European Space Port in French Guiana on October 30, 1997, carrying on board a small technology demonstration satellite called TeamSat. Several experiments were proposed by various universities and research institutions in Europe and five of them were finally selected and integrated into TeamSat, namely FIPEX, VTS, YES, ODD and the Autonomous Vision System, AVS, a fully autonomous star tracker and vision system. This paper gives a short overview of the TeamSat satellite: design, implementation and mission objectives. AVS is described in more...

  6. Electric machine and current source inverter drive system

    Science.gov (United States)

    Hsu, John S

    2014-06-24

    A drive system includes an electric machine and a current source inverter (CSI). This integration of an electric machine and an inverter uses the machine's field excitation coil not only for flux generation in the machine but also as the CSI inductor. This integration of the two technologies, namely the U machine motor and the CSI, opens a new chapter for component function integration instead of the traditional integration by simply placing separate machine and inverter components in the same housing. Elimination of the CSI inductor adds to the volumetric reduction of the CSI capacitors, and the elimination of PMs for the motor further improves the drive system cost, weight, and volume.

  7. Increased generalization capability of trainable COSFIRE filters with application to machine vision

    NARCIS (Netherlands)

    Azzopardi, George; Fernandez-Robles, Laura; Alegre, Enrique; Petkov, Nicolai

    2017-01-01

    The recently proposed trainable COSFIRE filters are highly effective in a wide range of computer vision applications, including object recognition, image classification, contour detection and retinal vessel segmentation. A COSFIRE filter is selective for a collection of contour parts in a certain

  8. Neuromorphic vision sensors and preprocessors in system applications

    Science.gov (United States)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  9. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, cannot accurately simulate insect vision characteristics, and/or is too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment which in turn hampers the progress on understanding of how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  10. Advanced Electrical Machines and Machine-Based Systems for Electric and Hybrid Vehicles

    Directory of Open Access Journals (Sweden)

    Ming Cheng

    2015-09-01

    Full Text Available The paper presents a number of advanced solutions on electric machines and machine-based systems for the powertrain of electric vehicles (EVs). Two types of systems are considered, namely the drive systems designated to the EV propulsion and the power split devices utilized in the popular series-parallel hybrid electric vehicle architecture. After reviewing the main requirements for the electric drive systems, the paper illustrates advanced electric machine topologies, including a stator permanent magnet (stator-PM) motor, a hybrid-excitation motor, a flux memory motor and a redundant motor structure. Then, it illustrates advanced electric drive systems, such as the magnetic-geared in-wheel drive and the integrated starter generator (ISG). Finally, three machine-based implementations of the power split devices are expounded, built up around the dual-rotor PM machine, the dual-stator PM brushless machine and the magnetic-geared dual-rotor machine. As a conclusion, the development trends in the field of electric machines and machine-based systems for EVs are summarized.

  11. Application of generalized Hough transform for detecting sugar beet plant from weed using machine vision method

    Directory of Open Access Journals (Sweden)

    A Bakhshipour Ziaratgahi

    2017-05-01

    Full Text Available Introduction Sugar beet (Beta vulgaris L.), as the second most important sugar source in the world after sugarcane, is one of the major industrial crops. The presence of weeds in sugar beet fields, especially at early growth stages, results in a substantial decrease in the crop yield. It is very important to efficiently eliminate weeds at early growing stages. The first step of precision weed control is accurate detection of weed locations in the field. This operation can be performed by machine vision techniques. The Hough transform is one of the shape feature extraction methods for object tracking in image processing which is basically used to identify lines or other geometrical shapes in an image. The generalized Hough transform (GHT) is a modified version of the Hough transform used not only for geometrical forms, but also for detecting any arbitrary shape. This method is based on a pattern matching principle that uses a set of vectors from feature points (usually object edge points) to a reference point to construct a pattern. By comparing this pattern with a set pattern, the desired shape is detected. The aim of this study was to identify the sugar beet plant from some common weeds in a field using the GHT. Materials and Methods Images required for this study were taken at the four-leaf stage of sugar beet as the beginning of the critical period of weed control. A shelter was used to avoid direct sunlight and prevent leaf shadows on each other. The obtained images were then introduced to the Image Processing Toolbox of the MATLAB programming software for further processing. Green and Red color components were extracted from the primary RGB images. In the first step, binary images were obtained by applying the optimal threshold on the G-R images. A comprehensive study of several sugar beet images revealed that there is a unique feature in sugar beet leaves which makes them differentiable from the weeds. The feature observed in all sugar beet plants at the four
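
    The first step of the method, separating vegetation from soil with a threshold on the G-R image, is easy to illustrate. The sketch below uses OpenCV with an Otsu threshold on a synthetic image; the colours and parameters are assumed, and the abstract's MATLAB toolchain and the generalized Hough matching stage are not reproduced here.

```python
import cv2
import numpy as np

# Illustrative sketch of the pre-processing step (assumed parameters): subtract
# the red channel from the green channel and apply an Otsu threshold to separate
# vegetation from soil before any shape analysis.
def vegetation_mask(bgr):
    g = bgr[:, :, 1].astype(np.int16)
    r = bgr[:, :, 2].astype(np.int16)
    g_minus_r = np.clip(g - r, 0, 255).astype(np.uint8)     # G-R image
    _, mask = cv2.threshold(g_minus_r, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# Synthetic test: a greenish blob (plant) on a brownish background (soil)
img = np.full((200, 200, 3), (40, 70, 110), np.uint8)       # soil-like BGR colour
cv2.circle(img, (100, 100), 40, (40, 160, 60), -1)          # plant-like BGR colour
mask = vegetation_mask(img)
print("vegetation pixels:", int((mask > 0).sum()))
```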

  12. Identification and location of catenary insulator in complex background based on machine vision

    Science.gov (United States)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Precisely locating the insulator is an important premise for fault detection. Since current location algorithms for insulators in catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator is located in a complex environment, SURF features are used to achieve coarse positioning of the target. Then, the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine localization. Finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.
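
    The two stages described above, feature-based coarse positioning followed by binocular triangulation, are sketched below. ORB is used as a freely available stand-in for SURF (which sits in OpenCV's non-free contrib module), and the stereo projection matrices, baseline and pixel coordinates are assumed values from a hypothetical calibration.

```python
import cv2
import numpy as np

# Sketch of the two-step idea (not the authors' code): coarse localization by
# feature matching against an insulator template, then 3D position by stereo
# triangulation.
def coarse_locate(template_gray, scene_gray):
    """Centroid of the best feature matches; would be run on real images."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(template_gray, None)
    k2, d2 = orb.detectAndCompute(scene_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:30]
    pts = np.float32([k2[m.trainIdx].pt for m in matches])
    return pts.mean(axis=0)

def triangulate(pt_left, pt_right, P1, P2):
    """Recover the metric 3D point from matched left/right pixel coordinates."""
    X = cv2.triangulatePoints(P1, P2, pt_left.reshape(2, 1), pt_right.reshape(2, 1))
    return (X[:3] / X[3]).ravel()

# Assumed rectified stereo pair: left camera at the origin, right camera shifted
# by a 0.1 m baseline, focal length 700 px, principal point (320, 240).
K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
print(triangulate(np.array([340.0, 250.0]), np.array([310.0, 250.0]), P1, P2))
```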

  13. Vision based nutrient deficiency classification in maize plants using multi class support vector machines

    Science.gov (United States)

    Leena, N.; Saju, K. K.

    2018-04-01

    Nutritional deficiencies in plants are a major concern for farmers as they affect productivity and thus profit. The work aims to classify nutritional deficiencies in maize plants in a non-destructive manner using image processing and machine learning techniques. The colored images of the leaves are analyzed and classified with a multi-class support vector machine (SVM) method. Several images of maize leaves with known deficiencies like nitrogen, phosphorus and potassium (NPK) are used to train the SVM classifier prior to the classification of test images. The results show that the method was able to classify and identify nutritional deficiencies.
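
    A compact sketch of the classification stage is shown below. The colour statistics, the synthetic training data and the RBF-SVM hyper-parameters are assumed for illustration; the published work trains on real maize-leaf images with known deficiencies.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal sketch with assumed data (not the published experiment): represent
# each leaf image by a few colour statistics and train a multi-class SVM to
# separate healthy leaves from N, P and K deficiencies.
def leaf_colour_features(rgb):
    """rgb: HxWx3 float array in [0, 1] -> per-channel mean and std."""
    return np.concatenate([rgb.mean(axis=(0, 1)), rgb.std(axis=(0, 1))])

rng = np.random.default_rng(42)
classes = ["healthy", "N-deficient", "P-deficient", "K-deficient"]
X = np.vstack([leaf_colour_features(rng.random((32, 32, 3))) for _ in range(80)])
y = rng.choice(classes, size=80)             # stand-in labels for illustration only

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print("predicted class for the first leaf:", clf.predict(X[:1])[0])
```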

  14. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques and spectral signatures is described. The vision instruments for food analysis as well as datasets of the food items...... used in this thesis are described. The methodological strategies are outlined including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis and linear versus non-linear approaches. One supervised feature selection algorithm...... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods together with some other state-of-the-art statistical and mathematical analysis techniques are applied on datasets of different food items; meat, dairy products, fruits...

  15. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  16. Vision Aided State Estimation for Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2007-01-01

    This paper presents the design and verification of a state estimator for a helicopter-based slung load system. The estimator is designed to augment the IMU-driven estimator found in many helicopter UAVs and uses vision-based updates only. The process model used for the estimator is a simple 4...

  17. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is using a vision-based row detection system. A new approach for row recognition is presented which is based on a grey-scale Hough transform on intelligently merged images, resulting in a considerable improvement in the speed of image processing.
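
    The essence of Hough-based row detection can be sketched in a few lines: threshold vegetation, then let the Hough transform vote for the dominant row direction. The example below uses an excess-green mask, OpenCV's standard Hough transform and a synthetic image; the parameters are assumed and the published intelligent image-merging step is omitted.

```python
import cv2
import numpy as np

# Illustrative sketch (assumed parameters, not the published algorithm):
# threshold vegetation with an excess-green index, then take the strongest
# Hough line as the crop-row direction.
def dominant_row_line(bgr):
    b = bgr[:, :, 0].astype(np.int16)
    g = bgr[:, :, 1].astype(np.int16)
    r = bgr[:, :, 2].astype(np.int16)
    exg = np.clip(2 * g - r - b, 0, 255).astype(np.uint8)    # excess-green index
    _, veg = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lines = cv2.HoughLines(veg, rho=1, theta=np.pi / 180, threshold=120)
    if lines is None:
        return None
    rho, theta = lines[0][0]                                  # line with most votes
    return rho, np.degrees(theta)

# Synthetic image: a bright green vertical strip standing in for a crop row
img = np.full((240, 320, 3), (60, 80, 120), np.uint8)
cv2.line(img, (160, 0), (160, 239), (40, 200, 40), 8)
print("row line (rho, theta in degrees):", dominant_row_line(img))
```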

  18. Machine Directional Register System Modeling for Shaft-Less Drive Gravure Printing Machines

    Directory of Open Access Journals (Sweden)

    Shanhui Liu

    2013-01-01

    Full Text Available In the latest type of gravure printing machines, referred to as the shaft-less drive system, each gravure printing roller is driven by an individual servo motor, and all motors are electrically synchronized. The register error is regulated by a speed difference between the adjacent printing rollers. In order to improve the control accuracy of the register system, an accurate mathematical model of the register system should be investigated for the latest machines. Therefore, the mathematical model of the machine directional register (MDR) system is studied for the multicolor gravure printing machines in this paper. According to the definition of the MDR error, the model is derived, and then it is validated by the numerical simulation and experiments carried out in the experimental setup of the four-color gravure printing machines. The results show that the established MDR system model is accurate and reliable.

  19. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Thesis by Joshua S. Lum, September 2015; Thesis Advisor: Xiaoping Yun, Co-Advisor: Zac Staples.

  20. Accurate Localization of Communicant Vehicles using GPS and Vision Systems

    Directory of Open Access Journals (Sweden)

    Georges CHALLITA

    2009-07-01

    Full Text Available The new generation of ADAS systems based on cooperation between vehicles can offer serious prospects for road security. Inter-vehicle cooperation is made possible thanks to the revolution in wireless mobile ad hoc networks. In this paper, we develop a system that minimizes the imprecision of the GPS used for car tracking, based on the data given by the GPS (the coordinates and speed) in addition to the vision data collected from the on-board system in the vehicle (camera and processor). Localization information can be exchanged between the vehicles through a wireless communication device. The system adopts the Monte Carlo method, or what we call a particle filter, for the treatment of the GPS data and vision data. An experimental study of this system is performed on our fleet of experimental communicating vehicles.
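
    The fusion idea, weighting position hypotheses by the likelihood of both the coarse GPS fix and the more precise vision measurement, is illustrated below in one dimension. The noise levels, particle count and measurement values are assumed; this is not the authors' filter, which operates on full vehicle states.

```python
import numpy as np

# Minimal 1D sketch of particle-filter fusion with assumed noise levels (not
# the authors' implementation): particles carry a position hypothesis, GPS
# gives a coarse fix, vision gives a more precise fix, and the particle
# weights combine both likelihoods.
rng = np.random.default_rng(7)
N = 500
true_pos = 12.0
particles = rng.uniform(0.0, 30.0, N)          # initial position hypotheses [m]
weights = np.full(N, 1.0 / N)

gps_meas, gps_sigma = true_pos + rng.normal(0, 3.0), 3.0     # coarse GPS fix
vis_meas, vis_sigma = true_pos + rng.normal(0, 0.5), 0.5     # precise vision fix

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Weight particles by the likelihood of both measurements, then resample
weights *= gaussian(particles, gps_meas, gps_sigma)
weights *= gaussian(particles, vis_meas, vis_sigma)
weights /= weights.sum()
particles = particles[rng.choice(N, size=N, p=weights)]      # multinomial resampling

print("fused position estimate:", particles.mean())
```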

  1. Monitoring system of multiple fire fighting based on computer vision

    Science.gov (United States)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. This system can adjust to the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system runs well in the test, and it has high reliability, low cost, and easy node expansion, which gives it a bright prospect of application and popularization.
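
    The colour-detection core of such a system can be illustrated with a simple HSV threshold and centroid computation. The flame colour bounds and the synthetic frame below are assumed values, not those of the deployed system.

```python
import cv2
import numpy as np

# Illustrative sketch with assumed colour bounds (not the deployed system):
# threshold flame-coloured pixels in HSV and return the fire centroid so a
# monitor/hydrant can be aimed at it.
LOWER = np.array([0, 120, 200], np.uint8)      # assumed flame hue/sat/val bounds
UPPER = np.array([35, 255, 255], np.uint8)

def fire_centroid(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                             # no flame-coloured region found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Synthetic frame with an orange patch standing in for a flame
frame = np.zeros((240, 320, 3), np.uint8)
cv2.circle(frame, (220, 90), 25, (0, 140, 255), -1)    # orange in BGR
print("fire centroid (x, y):", fire_centroid(frame))
```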

  2. Evaluation of Hindi to Punjabi Machine Translation System

    OpenAIRE

    Goyal, Vishal; Lehal, Gurpreet Singh

    2009-01-01

    Machine Translation in India is relatively young. The earliest efforts date from the late 80s and early 90s. The success of every system is judged from its experimental evaluation results. A number of machine translation systems have been started for development, but to the best of the authors' knowledge, no high-quality system that can be used in real applications has been completed. Recently, Punjabi University, Patiala, India has developed a Punjabi to Hindi machine translation system with high accur...

  3. Intelligent vision system for autonomous vehicle operations

    Science.gov (United States)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  4. Robust adaptive optics systems for vision science

    Science.gov (United States)

    Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.

    2018-02-01

    Adaptive Optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and eyes with optical changes to the anterior eye can be difficult and requires a control and an imaging system that is resilient when there is scattering and occlusion from the cornea and lens, as well as in the presence of irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality of the pupil, with spatially programmable detection. The wavefront control system uses a woofer tweeter approach, combining an electromagnetic mirror and a MEMS mirror and a single Shack Hartmann sensor. The SH sensor samples an 8 mm exit pupil and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with smaller irregular pupils, however because the depth of field increases under these conditions, sectioning performance decreases. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image has poorer image quality that includes both photoreceptors and blood vessels.

  5. Control System Design for Automatic Cavity Tuning Machines

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Khabiboulline, T.; Kotelnikov, S.; Makulski, A.; Nehring, R.; Nogiec, J.; Ross, M.; Schappert, W.; /Fermilab; Goessel, A.; Iversen, J.; Klinke, D.; /DESY

    2009-05-01

    A series of four automatic tuning machines for 9-cell TESLA-type cavities are being developed and fabricated in a collaborative effort among DESY, FNAL, and KEK. These machines are intended to support high-throughput cavity fabrication for construction of large SRF-based accelerator projects. Two of these machines will be delivered to cavity vendors for the tuning of XFEL cavities. The control system for these machines must support a level of automation adequate for industrial use by non-expert operators. This paper describes the control system hardware and software design for these machines.

  6. Control System Design for Automatic Cavity Tuning Machines

    International Nuclear Information System (INIS)

    Carcagno, R.; Khabiboulline, T.; Kotelnikov, S.; Makulski, A.; Nehring, R.; Nogiec, J.; Ross, M.; Schappert, W.; Goessel, A.; Iversen, J.; Klinke, D.

    2009-01-01

    A series of four automatic tuning machines for 9-cell TESLA-type cavities are being developed and fabricated in a collaborative effort among DESY, FNAL, and KEK. These machines are intended to support high-throughput cavity fabrication for construction of large SRF-based accelerator projects. Two of these machines will be delivered to cavity vendors for the tuning of XFEL cavities. The control system for these machines must support a level of automation adequate for industrial use by non-expert operators. This paper describes the control system hardware and software design for these machines.

  7. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    International Nuclear Information System (INIS)

    Castellini, P; Cecchini, S; Stroppa, L; Paone, N

    2015-01-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained with a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm, and an image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivities and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, and hence of the diagnosis made by machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant quality-control problem in the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes. (paper)
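
    The closed-loop optimization described above can be sketched as a simple genetic algorithm whose fitness is an image quality estimate computed in the region of interest. The sketch below is a minimal Python illustration under assumed choices (population size, Gaussian mutation, contrast as the quality metric); capture_with_pattern is a placeholder standing in for projecting a pattern with the digital light projector and grabbing a camera frame, not a real API.

    ```python
    import numpy as np

    def image_quality(image, roi):
        """Illustrative quality estimator: contrast inside the region of interest."""
        r0, r1, c0, c1 = roi
        return image[r0:r1, c0:c1].std()

    def capture_with_pattern(pattern):
        """Placeholder: project `pattern` with the DLP and grab a camera frame.

        Here the acquisition is simulated by adding noise to the pattern.
        """
        return pattern + np.random.normal(0, 0.05, pattern.shape)

    def genetic_illumination(shape=(32, 32), roi=(8, 24, 8, 24),
                             pop_size=20, generations=50, mutation=0.1):
        """Evolve a spatial illumination map that maximises ROI image quality."""
        rng = np.random.default_rng(0)
        population = rng.random((pop_size, *shape))          # intensity maps in [0, 1]
        for _ in range(generations):
            fitness = np.array([image_quality(capture_with_pattern(p), roi)
                                for p in population])
            order = np.argsort(fitness)[::-1]
            parents = population[order[:pop_size // 2]]      # keep the best half
            children = parents + rng.normal(0, mutation, parents.shape)
            np.clip(children, 0.0, 1.0, out=children)        # Gaussian mutation, clipped
            population = np.concatenate([parents, children])
        scores = [image_quality(capture_with_pattern(p), roi) for p in population]
        return population[int(np.argmax(scores))]

    if __name__ == "__main__":
        pattern = genetic_illumination()
        print("optimised pattern mean intensity:", pattern.mean())
    ```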

  8. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates on a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarification, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  9. Integration and coordination in a cognitive vision system

    OpenAIRE

    Wrede, Sebastian; Hanheide, Marc; Wachsmuth, Sven; Sagerer, Gerhard

    2006-01-01

    In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-d environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information tha...

  10. Nanomedical device and systems design challenges, possibilities, visions

    CERN Document Server

    2014-01-01

    Nanomedical Device and Systems Design: Challenges, Possibilities, Visions serves as a preliminary guide toward the inspiration of specific investigative pathways that may lead to meaningful discourse and significant advances in nanomedicine/nanotechnology. This volume considers the potential of future innovations that will involve nanomedical devices and systems. It endeavors to explore remarkable possibilities spanning medical diagnostics, therapeutics, and other advancements that may be enabled within this discipline. In particular, this book investigates just how nanomedical diagnostic and

  11. The Systemic Vision of the Educational Learning

    Science.gov (United States)

    Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas

    2012-01-01

    As the sophistication of technology is increasing, also increased the demand for quality in education. The expectation for quality has promoted broad range of products and systems, including in education. These factors include the increased diversity in the student body, which requires greater emphasis that allows a simple and dynamic model in the…

  12. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    Directory of Open Access Journals (Sweden)

    Miguel Gavilán

    2012-01-01

    Full Text Available This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents numerous tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.
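
    As a rough illustration of a detection-plus-SVM pipeline of this kind, the sketch below uses OpenCV's stock circular Hough transform and a scikit-learn SVM on flattened grey-level patches. It is not the authors' restricted Hough transform or their feature set, and all parameter values are assumptions for the example.

    ```python
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def detect_circular_candidates(gray):
        """Find circular sign candidates with OpenCV's Hough circle transform."""
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                                   param1=120, param2=40, minRadius=10, maxRadius=80)
        return [] if circles is None else circles[0]

    def make_classifier(train_patches, train_labels):
        """Train an SVM on fixed-size, flattened sign patches (illustrative features)."""
        X = np.array([cv2.resize(p, (32, 32)).ravel() / 255.0 for p in train_patches])
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(X, train_labels)
        return clf

    def recognise(frame, clf):
        """Detect circular candidates in a frame and classify each one."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        results = []
        for x, y, r in detect_circular_candidates(gray):
            x, y, r = int(x), int(y), int(r)
            patch = gray[max(y - r, 0):y + r, max(x - r, 0):x + r]
            if patch.size == 0:
                continue
            feat = cv2.resize(patch, (32, 32)).ravel() / 255.0
            results.append(((x, y, r), clf.predict([feat])[0]))
        return results
    ```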

  13. Complete vision-based traffic sign recognition supported by an I2V communication system.

    Science.gov (United States)

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents numerous tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.

  14. Low Cost Night Vision System for Intruder Detection

    Science.gov (United States)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems, such as night vision, affordable for more businesses and end users. We designed and implemented robust and low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
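
    A minimal version of the histogram-based detection idea can be written with OpenCV as follows. The comparison of the current frame against a background histogram (here with the Bhattacharyya distance) and the alarm threshold are illustrative choices, not details taken from the paper.

    ```python
    import cv2
    import numpy as np

    def rgb_histogram(frame, bins=16):
        """Normalised joint RGB histogram of a frame (the cue used for detection)."""
        hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3,
                            [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def run(camera_index=0, threshold=0.35):
        """Flag an intruder when the current histogram drifts from the background model."""
        cap = cv2.VideoCapture(camera_index)
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("camera not available")
        background = rgb_histogram(frame).astype(np.float32)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            current = rgb_histogram(frame).astype(np.float32)
            distance = cv2.compareHist(background, current, cv2.HISTCMP_BHATTACHARYYA)
            if distance > threshold:
                print("possible intruder, histogram distance =", round(distance, 3))
        cap.release()

    if __name__ == "__main__":
        run()
    ```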

  15. A vision fusion treatment system based on ATtiny26L

    Science.gov (United States)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. A vision fusion treatment system based on the principle of the eyeballs following a moving visual survey pole is first put forward. In this system the visual survey pole starts about 35 centimeters from the patient's face and then moves toward the middle position between the two eyes. The patient's eyeballs follow the movement of the visual survey pole; when they can no longer follow it, one or both eyeballs turn away from the pole, and this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal controls the visual survey pole so that it moves with continuously variable speed. The movement of the visual survey pole follows the modulating law by which the eyeballs track the pole.

  16. Computer Vision for Artificially Intelligent Robotic Systems

    Science.gov (United States)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper an Acoustic Imaging Recognition System (AIRS) is introduced which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up table method in this case; the method saves a great deal of calculation time and is practicable. The Acoustic Imaging Recognition System (AIRS) consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically separately; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles and intensity of the target we can decide the characteristics of the target, and all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit, we utilize the correlation method to overcome the limitation of short bursts of ultrasound, because the correlation system can transmit large time-bandwidth signals and obtain their resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and transferred into digital data by the μ-law coding method, and this data, together with the delay time T and angle information θH, θV, is sent to the main control unit for further analysis. For the recognition process in this paper, we use a dynamic look-up table method: first we set up several recognition pattern tables, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the main control unit.

  17. Reliable Software Development for Machine Protection Systems

    CERN Document Server

    Anderson, D; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Misiowiec, K; Stamos, K; Zerlauth, M

    2014-01-01

    The controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, is among the largest known code bases in the world. Industry has been applying Agile software engineering techniques for more than two decades now, and the advantages of these techniques can no longer be ignored when managing the code base for large projects within the accelerator community. Furthermore, CERN is a particular environment due to its high personnel turnover and manpower limitations, where applying Agile processes can improve both codebase management and code quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.

  18. The methodology of man-machine systems

    International Nuclear Information System (INIS)

    Hollnagel, E.

    1981-10-01

    This paper provides an elementary discussion of the problems of verification and validation in the context of the empirical evaluation of designs for man-machine systems. After a definition of the basic terms, a breakdown of the major parts of the process of evaluation is given, with the purpose of indicating where problems may occur. This is followed by a discussion of verification and validation, as two distinct concepts. Finally, some of the practical problems of ascertaining validity are discussed. The general conclusion is that rather than rely blindly on a well-established procedure or rule, one should pay attention to the meaningfulness of the aspects which are selected for observation, and the degree of systematism of the methods of observation and analysis. A qualitative approach is thus seen as complementary to a quantitative approach, rather than antithetical to it. (author)

  19. Design of Control System for Kiwifruit Automatic Grading Machine

    Directory of Open Access Journals (Sweden)

    Xingjian Zuo

    2013-05-01

    Full Text Available The kiwifruit automatic grading machine is an important machine for the postharvest processing of kiwifruit, and the control system ensures that the machine operates intelligently. The control system for the kiwifruit automatic grading machine designed in this paper comprises a host computer and a slave microcontroller. The host computer provides a visual grading interface for the machine using LabVIEW software; the slave controller adopts an STC89C52 microcontroller as its core, and C-language programs control a position sensor module, push-pull electromagnets, motor driver modules and a power supply to manage the operation of the machine as well as the raising and lowering of the grading baffle plates. The desired control performance was obtained through testing, and intelligent operation of the machine was realized.

  20. A vision system for a Mars rover

    Science.gov (United States)

    Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.

    1988-01-01

    A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
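
    The plan-view analysis described above can be illustrated with a small NumPy sketch that grids range points into cells, fits a plane per cell, and thresholds slope and roughness to build a traversability map. The cell size and thresholds are assumptions for the example, not values from the JPL system.

    ```python
    import numpy as np

    def traversability_map(points, cell=0.25, extent=10.0,
                           max_slope_deg=20.0, max_roughness=0.05):
        """Grid 3-D range points into plan-view cells and score traversability.

        points : (N, 3) array of x, y, z coordinates in the rover frame (metres).
        For each cell a plane is fitted; its slope and residual roughness decide
        whether the cell is considered traversable.
        """
        n = int(2 * extent / cell)
        traversable = np.zeros((n, n), dtype=bool)
        ix = ((points[:, 0] + extent) / cell).astype(int)
        iy = ((points[:, 1] + extent) / cell).astype(int)
        valid = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
        for gx in range(n):
            for gy in range(n):
                sel = points[valid & (ix == gx) & (iy == gy)]
                if len(sel) < 5:
                    continue                     # too few points: leave non-traversable
                # Fit the plane z = a*x + b*y + c by least squares.
                A = np.c_[sel[:, 0], sel[:, 1], np.ones(len(sel))]
                coeff, *_ = np.linalg.lstsq(A, sel[:, 2], rcond=None)
                slope = np.degrees(np.arctan(np.hypot(coeff[0], coeff[1])))
                roughness = np.std(sel[:, 2] - A @ coeff)
                traversable[gx, gy] = slope < max_slope_deg and roughness < max_roughness
        return traversable
    ```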

  1. A novel vision-based mold monitoring system in an environment of intense vibration

    International Nuclear Information System (INIS)

    Hu, Fen; He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2017-01-01

    Mold monitoring, especially machine vision-based monitoring, is increasingly used in the modern manufacturing industry, but existing systems cannot meet the detection speed and accuracy requirements for mold monitoring because they must operate in environments that exhibit intense vibration during production. To ensure that the system runs accurately and efficiently, we propose a new descriptor that combines a geometric relationship-based global context feature with the local scale-invariant feature transform for the image registration step of the mold monitoring system. The experimental results for four types of molds showed that the detection accuracy of the mold monitoring system is improved in environments with intense vibration. (paper)

  2. A novel vision-based mold monitoring system in an environment of intense vibration

    Science.gov (United States)

    Hu, Fen; He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2017-10-01

    Mold monitoring, especially machine vision-based monitoring, is increasingly used in the modern manufacturing industry, but existing systems cannot meet the detection speed and accuracy requirements for mold monitoring because they must operate in environments that exhibit intense vibration during production. To ensure that the system runs accurately and efficiently, we propose a new descriptor that combines a geometric relationship-based global context feature with the local scale-invariant feature transform for the image registration step of the mold monitoring system. The experimental results for four types of molds showed that the detection accuracy of the mold monitoring system is improved in environments with intense vibration.

  3. Embedded active vision system based on an FPGA architecture

    OpenAIRE

    Chalimbaud , Pierre; Berry , François

    2006-01-01

    In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks,...

  4. A smart sensor-based vision system: implementation and evaluation

    International Nuclear Information System (INIS)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R

    2006-01-01

    One of the methods of reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach is compared with two architectures implementing CMOS active pixel sensors (APS) and interfaced to the same microcontroller. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  5. A smart sensor-based vision system: implementation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R [Institute of Fundamental Electronics, Bat. 220, Paris XI University, 91405 Orsay (France)

    2006-04-21

    One of the methods of reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach is compared with two architectures implementing CMOS active pixel sensors (APS) and interfaced to the same microcontroller. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  6. Sensory systems II senses other than vision

    CERN Document Server

    Wolfe, Jeremy M

    1988-01-01

    This series of books, "Readings from the Encyclopedia of Neuroscience," consists of collections of subject-clustered articles taken from the Encyclopedia of Neuroscience. The Encyclopedia of Neuroscience is a reference source and compendium of more than 700 articles written by world authorities and covering all of neuroscience. We define neuroscience broadly as including all those fields that have as a primary goal the understanding of how the brain and nervous system work to mediate/control behavior, including the mental behavior of humans. Those interested in specific aspects of the neurosciences, particular subject areas or specialties, can of course browse through the alphabetically arranged articles of the Encyclopedia or use its index to find the topics they wish to read. However, for those readers (students, specialists, or others) who will find it useful to have collections of subject-clustered articles from the Encyclopedia, we issue this series of "Readings" in paperback. Students in neuroscienc...

  7. Vision-based pedestrian protection systems for intelligent vehicles

    CERN Document Server

    Geronimo, David

    2013-01-01

    Pedestrian Protection Systems (PPSs) are on-board systems aimed at detecting and tracking people in the surroundings of a vehicle in order to avoid potentially dangerous situations. These systems, together with other Advanced Driver Assistance Systems (ADAS) such as lane departure warning or adaptive cruise control, are one of the most promising ways to improve traffic safety. By the use of computer vision, cameras working either in the visible or infra-red spectra have been demonstrated as a reliable sensor to perform this task. Nevertheless, the variability of human's appearance, not only in

  8. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  9. International Border Management Systems (IBMS) Program : visions and strategies.

    Energy Technology Data Exchange (ETDEWEB)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  10. Machine protection system algorithm compiler and simulator

    International Nuclear Information System (INIS)

    White, G.R.; Sherwin, G.

    1993-01-01

    The Machine Protection System (MPS) component of the SLC's beam selection system, in which integrated current is continuously monitored and limited to safe levels through careful selection and feedback of the beam repetition rate, is described elsewhere in these proceedings. The novel decision-making mechanism by which that system can evaluate "safe levels", and choose an appropriate repetition rate in real time, is described here. The algorithm that this mechanism uses to make its decision is written in text files and expressed in states of the accelerator and its devices, one file per accelerator region. Before being used, a file is "compiled" to a binary format which can be easily processed as a forward-chaining decision tree. It is processed by distributed microcomputers local to the accelerator regions. A parent algorithm evaluates all results and reports directly to the beam control microprocessor. Operators can test new algorithms, or changes they make to them, with an online graphical MPS simulator.
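
    A toy version of such a forward-chaining decision tree, evaluated over device states to yield a safe repetition rate, might look like the following Python sketch. The node layout, device names and rates are invented for illustration; the real system compiles per-region text files into a binary format processed by distributed microcomputers.

    ```python
    # Minimal sketch of a forward-chaining decision tree over device states.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        device: Optional[str] = None      # device whose state is tested
        state: Optional[str] = None       # state value that sends us down `yes`
        yes: Optional["Node"] = None
        no: Optional["Node"] = None
        rate: Optional[float] = None      # leaf: allowed repetition rate in Hz

    def evaluate(node: Node, states: dict) -> float:
        """Walk the tree until a leaf gives the safe repetition rate."""
        while node.rate is None:
            node = node.yes if states.get(node.device) == node.state else node.no
        return node.rate

    # Example "region algorithm": if a collimator is out, run at full rate;
    # otherwise check a stopper before deciding (all values are fictitious).
    tree = Node("collimator", "OUT",
                yes=Node(rate=120.0),
                no=Node("stopper", "IN", yes=Node(rate=1.0), no=Node(rate=30.0)))

    print(evaluate(tree, {"collimator": "IN", "stopper": "OUT"}))  # -> 30.0
    ```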

  11. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance in terms of frame rate, resource utilization, and accuracy of the presented approaches is discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system.

  12. Autonomous navigation of the vehicle with vision system. Vision system wo motsu sharyo no jiritsu soko seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Yatabe, T.; Hirose, T.; Tsugawa, S. (Mechanical Engineering Laboratory, Tsukuba (Japan))

    1991-11-10

    As part of research on automatic driving systems, a pilot driverless automobile equipped with obstacle detection and automatic navigation functions, without depending on ground facilities such as guiding cables, was built and discussed. A small car was fitted with a vision system to recognize obstacles three-dimensionally by means of two TV cameras, and a dead-reckoning system to calculate the car's position and direction from the speeds of the rear wheels in real time. The control algorithm, which recognizes obstacles and the road area from the vision data and drives the car automatically, uses a table-look-up method that retrieves the necessary driving amount from a stored table, based on data from the vision system. The steering uses the target-point-following algorithm, provided that the car has a map. As a result of driving tests, useful knowledge was obtained: the system meets the basic functional requirements but needs a few improvements because it is an open-loop system. 36 refs., 22 figs., 2 tabs.

  13. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  14. Investigation of the Machining Stability of a Milling Machine with Hybrid Guideway Systems

    Directory of Open Access Journals (Sweden)

    Jui-Pin Hung

    2016-03-01

    Full Text Available This study aimed to investigate the machining stability of a horizontal milling machine with hybrid guideway systems by the finite element method. For this purpose, we first created a finite element model of the milling machine with the introduction of the contact stiffness defined at the sliding and rolling interfaces, respectively. Also, the motorized built-in spindle model was created and implemented in the whole machine model. Results of finite element simulations reveal that linear guides with different preloads greatly affect the dynamic responses and machining stability of the horizontal milling machine. The critical cutting depth predicted at the vibration mode associated with the machine tool structure is about 10 mm and 25 mm in the X and Y directions, respectively, while the cutting depth predicted at the vibration mode associated with the spindle structure is about 6.0 mm. Also, the machining stability can be increased when the preload of the linear roller guides of the feeding mechanism is changed from a lower to a higher amount.

  15. A Ship Cargo Hold Inspection Approach Using Laser Vision Systems

    OpenAIRE

    SHEN Yang; ZHAO Ning; LIU Haiwei; MI Chao

    2013-01-01

    Our paper presents a vision system based on the laser measurement system (LMS) for bulk ship inspection. The LMS scanner with a 2-axis servo system is installed on the ship loader to build the shape of the ship. Then, a group of real-time image processing algorithms is implemented to compute the shape of the cargo hold, the inclination angle of the ship and the relative position between the ship loader and the cargo hold. Based on those computed inspection data of the ship, the ship loader c...

  16. Fiber optic coherent laser radar 3d vision system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-01-01

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system

  17. Computer vision in roadway transportation systems: a survey

    Science.gov (United States)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  18. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, the robots must be capable of sensing their surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in the indoor environment. A robust and inexpensive approach for recognition and classification of the door, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a differentially driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in the corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of the PD controller for wall following and the PID controller to control the speed of the geared DC motor.
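
    The wall-following part can be illustrated with a brief PD control loop. The sketch below is written in Python for readability (the actual controller runs on an FPGA); the gains, the set-point distance and the sensor/steering interfaces are assumptions for the example.

    ```python
    # Illustrative PD wall-following loop; read_ultrasonic_cm and set_steering
    # are placeholder callables for the range sensor and the drive interface.
    class PDController:
        def __init__(self, kp, kd, setpoint):
            self.kp, self.kd, self.setpoint = kp, kd, setpoint
            self.prev_error = 0.0

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.kd * derivative

    def follow_wall(read_ultrasonic_cm, set_steering, dt=0.05, desired_cm=30.0):
        """Keep a constant lateral distance to the wall using ultrasonic range data."""
        pd = PDController(kp=0.8, kd=0.2, setpoint=desired_cm)
        while True:
            distance = read_ultrasonic_cm()        # lateral range to the wall (cm)
            correction = pd.update(distance, dt)   # positive: too close, steer away
            set_steering(correction)
    ```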

  19. Improving the reliability of stator insulation system in rotating machines

    International Nuclear Information System (INIS)

    Gupta, G.K.; Sedding, H.G.; Culbert, I.M.

    1997-01-01

    Reliable performance of rotating machines, especially generators and primary heat transport pump motors, is critical to the efficient operation of nuclear stations. A significant number of premature machine failures have been attributed to stator insulation problems. Ontario Hydro has attempted to assure the long-term reliability of the insulation system in critical rotating machines through proper specifications and quality assurance tests for new machines and periodic on-line and off-line diagnostic tests on machines in service. The experience gained over the last twenty years is presented in this paper. Functional specifications have been developed for the insulation system in critical rotating machines based on engineering considerations and our past experience. These specifications include insulation stress, insulation resistance and polarization index, partial discharge levels, dissipation factor and tip-up, and AC and DC hipot tests. Voltage endurance tests are specified for the groundwall insulation system of full-size production coils and bars. For machines with multi-turn coils, turn insulation strength for fast-fronted surges is specified and verified through tests on all coils in the factory and on samples of finished coils in the laboratory. Periodic on-line and off-line diagnostic tests were performed to assess the condition of the stator insulation system in machines in service. Partial discharges are measured on-line using several techniques to detect any excessive degradation of the insulation system in critical machines. Novel sensors have been developed and installed in several machines to facilitate measurements of partial discharges on operating machines. Several off-line tests are performed either to confirm the problems indicated by the on-line test or to assess the insulation system in machines which cannot be easily tested on-line. Experience with these tests, including their capabilities and limitations, is presented. (author)

  20. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for the autonomous vehicle guidance function, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in a passenger car requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. This system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane. Then, the position parameters of the obstacles and leading vehicles can be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
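
    For a rectified stereo pair, the epipolar constraint reduces to requiring matched features to lie on (nearly) the same image row. The OpenCV sketch below illustrates that filtering step with ORB features and brute-force matching; it is not the feature matching or aggregation scheme of the paper, and the tolerance value is an assumption.

    ```python
    import cv2
    import numpy as np

    def stereo_candidates(left, right, row_tolerance=2.0):
        """Match features between rectified stereo frames and keep pairs that
        satisfy the epipolar constraint (nearly equal image rows)."""
        orb = cv2.ORB_create(1000)
        kp_l, des_l = orb.detectAndCompute(left, None)
        kp_r, des_r = orb.detectAndCompute(right, None)
        if des_l is None or des_r is None:
            return []
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        pairs = []
        for m in matcher.match(des_l, des_r):
            (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
            disparity = xl - xr
            # Keep matches on the same row with positive disparity (in front of the car).
            if abs(yl - yr) <= row_tolerance and disparity > 0:
                pairs.append(((xl, yl), (xr, yr), disparity))
        return pairs

    # Nearby obstacles appear as clusters of large-disparity pairs, which a
    # subsequent aggregation / tracking stage would group and follow.
    ```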

  1. MODELING AND INVESTIGATION OF ASYNCHRONOUS TWO-MACHINE SYSTEM MODES

    Directory of Open Access Journals (Sweden)

    V. S. Safaryan

    2014-01-01

    Full Text Available The paper considers stationary and transient processes of an asynchronous two-machine system. A mathematical model for the investigation of stationary and transient modes, static characteristics, and research results on the dynamic process of starting up the asynchronous two-machine system are given in the paper.

  2. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de Sonora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

    This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun's position. The algorithms to calculate the position of the sun and to process the image are developed in LabVIEW. The responses obtained from the control show that it is possible to use vision in a closed-loop control scheme.

  3. VirtualSpace: A vision of a machine-learned virtual space environment

    Science.gov (United States)

    Bortnik, J.; Sarno-Smith, L. K.; Chu, X.; Li, W.; Ma, Q.; Angelopoulos, V.; Thorne, R. M.

    2017-12-01

    Space borne instrumentation tends to come and go. A typical instrument will go through a phase of design and construction, be deployed on a spacecraft for several years while it collects data, and then be decommissioned and fade into obscurity. The data collected from that instrument will typically receive much attention while it is being collected, perhaps in the form of event studies, conjunctions with other instruments, or a few statistical surveys, but once the instrument or spacecraft is decommissioned, the data will be archived and receive progressively less attention with every passing year. This is the fate of all historical data, and will be the fate of data being collected by instruments even at the present time. But what if those instruments could come alive, and all be simultaneously present at any and every point in time and space? Imagine the scientific insights, and societal gains that could be achieved with a grand (virtual) heliophysical observatory that consists of every current and historical mission ever deployed? We propose that this is not just fantasy but is imminently doable with the data currently available, with the present computational resources, and with currently available algorithms. This project revitalizes existing data resources and lays the groundwork for incorporating data from every future mission to expand the scope and refine the resolution of the virtual observatory. We call this project VirtualSpace: a machine-learned virtual space environment.

  4. Building machine learning systems with Python

    CERN Document Server

    Coelho, Luis Pedro

    2015-01-01

    This book primarily targets Python developers who want to learn and use Python's machine learning capabilities and gain valuable insights from data to develop effective solutions for business problems.

  5. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Science.gov (United States)

    2010-09-30

    ... (``ID'') of the presiding administrative law judge (``ALJ'') finding no violation of section 337 of the..., Virginia; Rasco GmbH (``Rasco'') of Germany; MVTec Software GmbH of Germany and MVTec LLC of Cambridge...

  6. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Science.gov (United States)

    2010-11-22

    ... (``ID'') of the presiding administrative law judge (``ALJ''). The Commission has determined that there... MVTec LLC of Cambridge, Massachusetts (collectively, ``MVTech respondents''); Omron Corporation (``Omron...

  7. Methods and systems for micro machines

    Energy Technology Data Exchange (ETDEWEB)

    Stalford, Harold L.

    2018-03-06

    A micro machine may be in the micrometer domain or smaller. The micro machine may include a micro actuator and a micro shaft coupled to the micro actuator. The micro shaft is operable to be driven by the micro actuator. A tool is coupled to the micro shaft and is operable to perform work in response to at least motion of the micro shaft.

  8. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    Science.gov (United States)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different 3-dimensional (3D) appearances are difficult to shape into a customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating 3D-adjusted laser processing paths by measuring the 3D geometry of those irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visually servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  9. High-speed potato grading and quality inspection based on a color vision system

    Science.gov (United States)

    Noordam, Jacco C.; Otten, Gerwoud W.; Timmermans, Toine J. M.; van Zwol, Bauke H.

    2000-03-01

    A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape and external defects such as greening, mechanical damages, rhizoctonia, silver scab, common scab, cracks and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC Digital Signal Processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses Linear Discriminant Analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier based shape classification technique. Features such as area, eccentricity and central moments are used to discriminate between similar colored defects. Experiments with red and yellow skin-colored potatoes have shown that the system is robust and consistent in its classification.
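
    The LDA-plus-Mahalanobis pixel classification step can be sketched as follows with scikit-learn and NumPy. Class labels, features and data handling are illustrative only; the production system ran an equivalent scheme on SHARC DSPs at line-scan rates.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_pixel_classifier(pixels_rgb, labels):
        """Fit LDA on labelled training pixels and store per-class statistics
        for a Mahalanobis-distance decision in the LDA space."""
        lda = LinearDiscriminantAnalysis()
        z = lda.fit_transform(pixels_rgb, labels)
        stats = {}
        for cls in np.unique(labels):
            zc = z[labels == cls]
            mean = zc.mean(axis=0)
            inv_cov = np.linalg.inv(np.atleast_2d(np.cov(zc, rowvar=False)))
            stats[cls] = (mean, inv_cov)
        return lda, stats

    def classify_pixels(pixels_rgb, lda, stats):
        """Assign each pixel to the class with the smallest Mahalanobis distance."""
        z = lda.transform(pixels_rgb)
        classes = list(stats)
        dists = np.stack([
            np.einsum("ij,jk,ik->i", z - mean, inv_cov, z - mean)   # squared distance
            for mean, inv_cov in (stats[c] for c in classes)], axis=1)
        return np.array(classes)[np.argmin(dists, axis=1)]
    ```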

  10. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    Full Text Available This work aims at detecting surface defects on reflective industrial parts. A machine vision system performing the detection of geometric-aspect surface defects is completely described. The revealing of defects is realized by a particular lighting device, which has been carefully designed to ensure the imaging of defects. The lighting system greatly simplifies the image processing for defect segmentation, so that real-time inspection of reflective products is possible. To assist in the design of the imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images. Simulation provides a very efficient way to perform tests compared with numerous manual experiments.

  11. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

    Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. The system has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  12. Low Cost Vision Based Personal Mobile Mapping System

    Science.gov (United States)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. The system has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  13. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  14. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

    Full Text Available The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under the sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m although GPS signals were not available at all during the entire test drive of 15 minutes. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  15. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under the sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m although GPS signals were not available at all during the entire test drive of 15 minutes. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
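
    The sensor-fusion step can be illustrated with a deliberately simplified Kalman filter that fuses a constant-velocity prediction with vision-derived position fixes. This linear sketch omits the attitude states and the single photo resection itself; the noise values and state layout are assumptions for the example.

    ```python
    import numpy as np

    class SimplePositionEKF:
        """Minimal Kalman-style fusion of dead reckoning with vision position fixes."""

        def __init__(self, dt=0.1):
            self.x = np.zeros(4)                          # state: [px, py, vx, vy]
            self.P = np.eye(4)                            # state covariance
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt   # constant-velocity model
            self.Q = np.diag([0.01, 0.01, 0.1, 0.1])      # process noise (assumed)
            self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
            self.R = np.diag([4.0, 4.0])                  # vision fix noise, m^2 (assumed)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update_with_vision_fix(self, position_xy):
            z = np.asarray(position_xy, dtype=float)
            y = z - self.H @ self.x                       # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
    ```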

  16. Indirect Tire Monitoring System - Machine Learning Approach

    Science.gov (United States)

    Svensson, O.; Thelin, S.; Byttner, S.; Fan, Y.

    2017-10-01

    The heavy vehicle industry today has no legal requirement to provide a tire pressure monitoring system. This has created issues surrounding unknown tire pressure and tread depth during active service. There is also no standardization for these kinds of systems, which means that different manufacturers and third-party solutions work according to their own principles, and it can be hard to know what works for a given vehicle type. The objective is to create an indirect tire monitoring system that can generalize a method that detects both incorrect tire pressure and tread depth for different types of vehicles within a fleet, without the need for additional physical sensors or vehicle-specific parameters. The existing connected sensors communicate through CAN and are interpreted by the Drivec Bridge hardware that exists in the fleet. Using supervised machine learning, a classifier was created for each axle, where the main focus was the front axle, which had the most issues. The classifier classifies the condition of the vehicle's tires and is implemented in Drivec's cloud service, from which it receives its data. The resulting classifier is a random forest implemented in Python. For the front axle, with a data set consisting of 9767 samples of buses with correct tire condition and 1909 samples of buses with incorrect tire condition, it has an accuracy of 90.54% (0.96%). The data sets were created from 34 unique measurements from buses between January and May 2017. This classifier has been exported and is used inside a Node.js module created for Drivec's cloud service, which is the result of the whole implementation. The developed solution is called the Indirect Tire Monitoring System (ITMS) and is seen as a process. This process predicts bad classes in the cloud, which leads to warnings. The warnings are defined as incidents. They contain only the information needed, and the bandwidth of the incidents is also controlled so incidents are created within an
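
    The modelling approach (a supervised random forest per axle, trained on CAN-derived features) can be reproduced in outline with scikit-learn. The sketch below uses synthetic data with the sample counts quoted above; the feature set, forest size and train/test split are assumptions, not the thesis' actual configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the front-axle data set: feature vectors labelled
    # correct (0) / incorrect (1) tire condition.
    rng = np.random.default_rng(42)
    n_good, n_bad = 9767, 1909
    X = np.vstack([rng.normal(0.0, 1.0, (n_good, 8)),     # e.g. wheel-speed ratios,
                   rng.normal(0.6, 1.2, (n_bad, 8))])     # vibration statistics, ...
    y = np.array([0] * n_good + [1] * n_bad)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                 random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", round(clf.score(X_test, y_test), 4))
    ```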

  17. Improvement of human operator vibroprotection system in the utility machine

    Science.gov (United States)

    Korchagin, P. A.; Teterina, I. A.; Rahuba, L. F.

    2018-01-01

    The article is devoted to the urgent problem of improving the efficiency of road-building utility machines by improving the human operator vibroprotection system, namely by determining acceptable values of the rigidity and resistance coefficients of the cab suspension elements and of the operator's seat. Negative effects of vibration result in decreased labour productivity and occupational diseases. Besides, structural vibrations have a damaging impact on the machine units and mechanisms, which reduces the overall service life of the machine. Results of experimental and theoretical research on the operator vibroprotection system of a road-building utility machine are presented. An algorithm for a program that calculates the dynamic impacts on the operator under different structural and performance parameters of the machine, considering combinations of external perturbation influences, is proposed.

  18. Vision and dual IMU integrated attitude measurement system

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang

    2018-01-01

    To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and dual IMUs (inertial measurement units) is built. The measurement system fuses the attitude information from vision with the angular rates of the dual IMUs through an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured motion object and the other (slave) to the rocking base. As the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame, where the latter can be seen as redundant, harmful movement information for relative attitude measurement between the measured object and the rocking base. The slave IMU is used to remove the motion of the rocking base relative to the inertial frame from the master IMU. The proposed integrated attitude measurement system is tested on a practical experimental platform, and experimental results of superior precision and reliability show the feasibility and effectiveness of the proposed system.
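
    The key step described above is removing the rocking-base motion, sensed by the slave IMU, from the master IMU's angular rate. A minimal sketch of that compensation is given below; the extrinsic alignment rotation and the sample angular rates are assumed purely for illustration.

```python
# Sketch of rocking-base compensation: the slave IMU's angular rate (base motion
# w.r.t. the inertial frame) is expressed in the master IMU frame and subtracted
# from the master rate, leaving the motion of the object relative to the base.
# The alignment rotation and sample values are illustrative assumptions.
import numpy as np

# Assumed known alignment from the slave (base) IMU frame to the master IMU frame.
R_slave_to_master = np.eye(3)

omega_master = np.array([0.12, -0.03, 0.40])   # rad/s: object motion + base rocking
omega_slave = np.array([0.02, -0.01, 0.05])    # rad/s: base rocking only

omega_relative = omega_master - R_slave_to_master @ omega_slave
print(omega_relative)                          # object rate relative to the base
```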

  19. VISION: a Versatile and Innovative SIlicOn tracking system

    CERN Document Server

    Lietti, Daniela; Vallazza, Erik

    This thesis work focuses on the study of the performance of different tracking and profilometry systems (the so-called INSULAB, INSUbria LABoratory, and VISION, Versatile and Innovative SIlicON, telescopes) used in recent years by the NTA-HCCC, COHERENT (COHERENT effects in crystals for the physics of accelerators), ICE-RAD (Interaction in Crystals for Emission of RADiation) and CHANEL (CHAnneling of NEgative Leptons) experiments, four collaborations of the INFN (Istituto Nazionale di Fisica Nucleare) dedicated to research in the field of crystal physics.

  20. Vision system for measuring wagon buffers’ lateral movements

    Directory of Open Access Journals (Sweden)

    Barjaktarović Marko

    2013-01-01

    Full Text Available This paper presents a vision system designed for measuring the horizontal and vertical displacements of a railway wagon body. The system comprises a commercial webcam and a cooperative target of an appropriate shape. The lateral buffer movement is determined by calculating the target displacement in real time, processing the camera image on a LabVIEW platform using the free OpenCV library. Laboratory experiments demonstrate an accuracy better than ±0.5 mm within a 50 mm measuring range.
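
    The measurement principle described above (locating a cooperative target in each webcam frame and converting its pixel displacement to millimetres) can be sketched as follows. The original implementation runs on a LabVIEW platform with OpenCV; this Python version, with an assumed brightness threshold and an assumed pixel-to-millimetre scale, only illustrates the idea.

```python
# Sketch: track the centroid of a bright cooperative target in webcam frames and
# report its displacement in millimetres. Threshold and scale are assumptions.
import cv2

MM_PER_PIXEL = 0.25        # assumed scale obtained from a prior calibration
reference = None

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # bright target
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            if reference is None:
                reference = (cx, cy)          # first detection defines zero
            dx = (cx - reference[0]) * MM_PER_PIXEL
            dy = (cy - reference[1]) * MM_PER_PIXEL
            print(f"lateral {dx:+.2f} mm, vertical {dy:+.2f} mm")
    cv2.imshow("target", mask)
    if cv2.waitKey(1) == 27:                  # Esc stops the measurement loop
        break
cap.release()
cv2.destroyAllWindows()
```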

  1. System of technical vision for autonomous unmanned aerial vehicles

    Science.gov (United States)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in the frames from the camera mounted on the UAV. The trained classifier is invariant to rotation as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the technical vision system to determine more accurately the location of the objects of interest and their movement relative to the camera.
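
    The particle-analysis step mentioned above is implemented in LabVIEW; the sketch below shows the equivalent idea in Python with OpenCV, labelling connected regions of a binarized frame and keeping those whose area falls in a plausible range. The thresholding method and the area limits are illustrative assumptions.

```python
# Sketch of particle analysis: connected-component labelling of a binary frame
# with simple area filtering. Threshold and area limits are assumptions.
import cv2
import numpy as np

def candidate_regions(gray, min_area=150, max_area=20000):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    regions = []
    for i in range(1, n):                          # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            x = stats[i, cv2.CC_STAT_LEFT]
            y = stats[i, cv2.CC_STAT_TOP]
            w = stats[i, cv2.CC_STAT_WIDTH]
            h = stats[i, cv2.CC_STAT_HEIGHT]
            regions.append({"bbox": (x, y, w, h), "centroid": tuple(centroids[i])})
    return regions

frame = np.zeros((240, 320), np.uint8)
cv2.rectangle(frame, (100, 80), (140, 130), 255, -1)   # synthetic "object"
print(candidate_regions(frame))
```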

  2. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD. Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
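
    A software reference for the class of filter described above may help readers who do not work with FPLDs: an order-statistic (here, median) operator applied over a joint spatial and temporal window. The paper's design is bit-serial and adaptive; the NumPy/SciPy sketch below, with an assumed 3 x 3 x 3 window, only illustrates the spatiotemporal windowing.

```python
# Software reference for a spatiotemporal order-statistic filter: a median over
# a 3-frame temporal x 3x3 spatial neighbourhood (window size assumed).
import numpy as np
from scipy.ndimage import median_filter

# Noisy synthetic sequence: (frames, rows, cols).
video = np.random.randint(0, 256, size=(10, 64, 64)).astype(np.uint8)

filtered = median_filter(video, size=(3, 3, 3), mode="nearest")
print(filtered.shape, filtered.dtype)
```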

  3. Machine Vision for Object Detection and Profiling in an Unstructured Environment

    Energy Technology Data Exchange (ETDEWEB)

    Walton, Miles Conley; Kinoshita, Robert Arthur

    2002-08-01

    The Handling and Sorting System for 55-Gallon Drums (HANDSS-55) is a DOE project to develop an automated method for retrieving items that are not acceptable at the Waste Isolation Pilot Plant (WIPP) from 55-gallon drums of low-level waste. The HANDSS-55 is a modular system that opens drums, sorts the waste, and then repackages the remaining waste in WIPP compliant barrels. The Sorting Station module relies on a non-contact measurement system to quickly provide a 3D profile of the sorting area. It then analyses the 3D profile and a color image to determine the position and orientation of an operator selected waste item. The item is then removed from the sorting area by a robotic arm. The use of both image and profile information for object determination provides a fast, effective method of finding and retrieving selected objects in the unstructured environment of the sorting module.

  4. Machine Vision for Object Detection and Profiling in an Unstructured Environment

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, R.A.; Walton, M.C.

    2002-05-23

    The Handling and Sorting System for 55-Gallon Drums (HANDSS-55) is a DOE project to develop an automated method for retrieving items that are not acceptable at the Waste Isolation Pilot Plant (WIPP) from 55-gallon drums of low-level waste. The HANDSS-55 is a modular system that opens drums, sorts the waste, and then repackages the remaining waste in WIPP compliant barrels. The Sorting Station module relies on a non-contact measurement system to quickly provide a 3D profile of the sorting area. It then analyses the 3D profile and a color image to determine the position and orientation of an operator selected waste item. The item is then removed from the sorting area by a robotic arm. The use of both image and profile information for object determination provides a fast, effective method of finding and retrieving selected objects in the unstructured environment of the sorting module.

  5. Machine Vision for Object Detection and Profiling in an Unstructured Environment

    International Nuclear Information System (INIS)

    Kinoshita, R.A.; Walton, M.C.

    2002-01-01

    The Handling and Sorting System for 55-Gallon Drums (HANDSS-55) is a DOE project to develop an automated method for retrieving items that are not acceptable at the Waste Isolation Pilot Plant (WIPP) from 55-gallon drums of low-level waste. The HANDSS-55 is a modular system that opens drums, sorts the waste, and then repackages the remaining waste in WIPP compliant barrels. The Sorting Station module relies on a non-contact measurement system to quickly provide a 3D profile of the sorting area. It then analyses the 3D profile and a color image to determine the position and orientation of an operator selected waste item. The item is then removed from the sorting area by a robotic arm. The use of both image and profile information for object determination provides a fast, effective method of finding and retrieving selected objects in the unstructured environment of the sorting module

  6. Discrete Model Reference Adaptive Control System for Automatic Profiling Machine

    Directory of Open Access Journals (Sweden)

    Peng Song

    2012-01-01

    Full Text Available An automatic profiling machine is a motion system with a high degree of parameter variation and frequent transient processes, and it requires accurate and timely control. In this paper, the discrete model reference adaptive control system of an automatic profiling machine is discussed. Firstly, the model of the automatic profiling machine is presented according to the parameters of the DC motor. Then the design of the discrete model reference adaptive controller is proposed, and the control rules are proven. The simulation results show that the adaptive control system has favorable dynamic performance.

  7. Reinforcement and Systemic Machine Learning for Decision Making

    CERN Document Server

    Kulkarni, Parag

    2012-01-01

    Reinforcement and Systemic Machine Learning for Decision Making. There are always difficulties in making machines that learn from experience. Complete information is not always available, or it becomes available in bits and pieces over a period of time. With respect to systemic learning, there is a need to understand the impact of decisions and actions on a system over that period of time. This book takes a holistic approach to addressing that need and presents a new paradigm, creating new learning applications and, ultimately, more intelligent machines. The first book of its kind in this new an

  8. Design and Construction of Wireless Control System for Drilling Machine

    Directory of Open Access Journals (Sweden)

    Nang Su Moan Hsam

    2015-06-01

    Full Text Available Abstract A drilling machine is used for boring holes in various materials and is used in woodworking, metalworking, construction and do-it-yourself projects. When the machine operates for a long time the temperature increases, so the temperature of the machine needs to be controlled and some lubrication applied to reduce it. Owing to improvements in technology, the system can be controlled over a wireless network. This control system uses Windows Communication Foundation (WCF), the latest service-oriented technology, to control all drilling machines in industry simultaneously. All drilling machines start working when they receive a command from the server. After a machine has been running for a long time, its temperature gradually increases. The system uses an LM35 temperature sensor to measure the temperature. When the temperature exceeds the safe level programmed in the host server, the controller at the server commands the machine to adjust the motor speed and to apply lubrication at the tip and edges of the drill. The command from the server is received by the client and sent to a PIC. In this control system a PIC microcontroller is used as an interface between the client computer and the machine. The motor speed is controlled with PWM, and a water pump system is used for lubrication. The control system is designed and simulated with a 12 V DC motor, an LM35 sensor, an LCD display and a relay that opens the water container to spray water between the drill and the workpiece. The host server chooses which overheating drilling machine to control by selecting the IP address of the client connected to that machine.

  9. Adaptive Learning Systems: Beyond Teaching Machines

    Science.gov (United States)

    Kara, Nuri; Sevim, Nese

    2013-01-01

    Since the 1950s, teaching machines have changed a lot. Today, we have different ideas about how people learn and what instructors should do to help students during their learning process. We have adaptive learning technologies that can create much more student-oriented learning environments. The purpose of this article is to present these changes and its…

  10. Automated hardwood lumber grading utilizing a multiple sensor machine vision technology

    Science.gov (United States)

    D. Earl Kline; Chris Surak; Philip A. Araman

    2003-01-01

    Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical and Computer Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading...

  11. Early Cognitive Vision as a Frontend for Cognitive Systems

    DEFF Research Database (Denmark)

    Krüger, Norbert; Pugeault, Nicolas; Baseski, Emre

    We discuss the need for an elaborated in-between stage bridging early vision and cognitive vision, which we call `Early Cognitive Vision' (ECV). This stage provides semantically rich, disambiguated and largely task-independent scene representations which can be used in many contexts. In addition...

  12. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    Science.gov (United States)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  13. Automated Detection of Branch Shaking Locations for Robotic Cherry Harvesting Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Suraj Amatya

    2017-10-01

    Full Text Available Automation in cherry harvesting is essential to reduce the demand for seasonal labor for cherry picking and reduce the cost of production. The mechanical shaking of tree branches is one of the widely studied and used techniques for harvesting small tree fruit crops like cherries. To automate the branch shaking operation, different methods of detecting branches and cherries in full foliage canopies of the cherry tree have been developed previously. The next step in this process is the localization of shaking positions in the detected tree branches for mechanical shaking. In this study, a method of locating shaking positions for automated cherry harvesting was developed based on branch and cherry pixel locations determined using RGB images and 3D camera images. First, branch and cherry regions were located in 2D RGB images. Depth information provided by a 3D camera was then mapped on to the RGB images using a standard stereo calibration method. The overall root mean square error in estimating the distance to desired shaking points was 0.064 m. Cherry trees trained in two different canopy architectures, Y-trellis and vertical trellis systems, were used in this study. Harvesting testing was carried out by shaking tree branches at the locations selected by the algorithm. For the Y-trellis system, the maximum fruit removal efficiency of 92.9% was achieved using up to five shaking events per branch. However, maximum fruit removal efficiency for the vertical trellis system was 86.6% with up to four shakings per branch. However, it was found that only three shakings per branch would achieve a fruit removal percentage of 92.3% and 86.4% in Y and vertical trellis systems respectively.

  14. When machine vision meets histology: A comparative evaluation of model architecture for classification of histology sections.

    Science.gov (United States)

    Zhong, Cheng; Han, Ju; Borowsky, Alexander; Parvin, Bahram; Wang, Yunfu; Chang, Hang

    2017-01-01

    Classification of histology sections in large cohorts, in terms of distinct regions of microanatomy (e.g., stromal) and histopathology (e.g., tumor, necrosis), enables the quantification of tumor composition, and the construction of predictive models of genomics and clinical outcome. To tackle the large technical variations and biological heterogeneities, which are intrinsic in large cohorts, emerging systems utilize either prior knowledge from pathologists or unsupervised feature learning for invariant representation of the underlying properties in the data. However, to a large degree, the architecture for tissue histology classification remains unexplored and requires urgent systematic investigation. This paper is the first attempt to provide insights into three fundamental questions in tissue histology classification: I. Is unsupervised feature learning preferable to human-engineered features? II. Does cellular saliency help? III. Does the sparse feature encoder contribute to recognition? We show that (a) in I, both Cellular Morphometric Feature and features from unsupervised feature learning lead to superior performance when compared to SIFT and [Color, Texture]; (b) in II, cellular saliency incorporation impairs the performance for systems built upon pixel-/patch-level features; and (c) in III, the effect of the sparse feature encoder is correlated with the robustness of features, and the performance can be consistently improved by the multi-stage extension of systems built upon both Cellular Morphometric Feature and features from unsupervised feature learning. These insights are validated with two cohorts of Glioblastoma Multiforme (GBM) and Kidney Clear Cell Carcinoma (KIRC). Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Fast and flexible 3D object recognition solutions for machine vision applications

    Science.gov (United States)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
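
    Since the recognizer described above localizes objects by best-fitting geometric primitives to 3D data, a minimal sketch of one such primitive fit (a RANSAC plane fit over a point cloud) is given below. The inlier threshold, iteration count and synthetic data are illustrative assumptions; the paper's system also handles cylinders and cones and adds an intelligent pre-processing step.

```python
# Sketch of primitive fitting: a basic RANSAC plane fit to a 3D point cloud.
# Threshold, iteration count and synthetic data are illustrative assumptions.
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.01, seed=0):
    """Fit a plane n.x + d = 0 to a point cloud with a basic RANSAC loop."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(points), bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers

# Noisy synthetic points lying roughly on the plane z = 0.
pts = np.column_stack([np.random.rand(500), np.random.rand(500), np.zeros(500)])
pts += np.random.normal(0.0, 0.002, pts.shape)
(normal, d), inliers = ransac_plane(pts)
print("plane normal:", normal, "inliers:", int(inliers.sum()))
```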

  16. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can make robust and real-time detection and recognition of parking spaces. During parking process, the omnidirectional information of the environment can be obtained by using four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. In order to achieve this purpose, a polynomial fisheye distortion model is firstly used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine four individual images from fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Through experimental analysis, we can see that the proposed method can get effective and robust real-time results in both parking space recognition and automatic parking.
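
    The parking-space features mentioned above are extracted with a Radon-transform based method; a minimal sketch of that step is given below, finding the dominant direction of markings in a synthetic bird's-eye-view edge image with scikit-image. The synthetic image and the way the peak is interpreted are illustrative assumptions, not the paper's full pipeline.

```python
# Sketch of a Radon-transform step: find the dominant direction of parking-space
# markings in a bird's-eye-view edge image. Data and interpretation are assumed.
import numpy as np
from skimage.transform import radon

# Synthetic edge image with two vertical parking-space markings.
edges = np.zeros((200, 200))
edges[:, 60] = 1.0
edges[:, 140] = 1.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(edges, theta=theta, circle=False)

# The projection angle with the strongest peak corresponds to the marking direction.
dominant_angle = theta[np.argmax(sinogram.max(axis=0))]
print("dominant marking angle (deg):", dominant_angle)
```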

  17. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

    Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs. The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i an orientation sensor (AHRS; (ii a position sensor (GPS; and (iii a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  18. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  19. Calibration method for a vision guiding-based laser-tracking measurement system

    International Nuclear Information System (INIS)

    Shao, Mingwei; Wei, Zhenzhong; Hu, Mengjie; Zhang, Guangjun

    2015-01-01

    Laser-tracking measurement systems (laser trackers) based on a vision-guiding device are widely used in industrial fields, and their calibration is important. As conventional methods typically have many disadvantages, such as difficult machining of the target and overdependence on the retroreflector, a novel calibration method is presented in this paper. The retroreflector, which is necessary in the normal calibration method, is unnecessary in our approach. As the laser beam is linear, points on the beam can be obtained with the help of a normal planar target. In this way, we can determine the function of a laser beam under the camera coordinate system, while its corresponding function under the laser-tracker coordinate system can be obtained from the encoder of the laser tracker. Clearly, when several groups of functions are confirmed, the rotation matrix can be solved from the direction vectors of the laser beams in different coordinate systems. As the intersection of the laser beams is the origin of the laser-tracker coordinate system, the translation matrix can also be determined. Our proposed method not only achieves the calibration of a single laser-tracking measurement system but also provides a reference for the calibration of a multistation system. Simulations to evaluate the effects of some critical factors were conducted. These simulations show the robustness and accuracy of our method. In real experiments, the root mean square error of the calibration result reached 1.46 mm within a range of 10 m, even though the vision-guiding device focuses on a point approximately 5 m away from the origin of its coordinate system, with a field of view of approximately 200 mm  ×  200 mm. (paper)
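
    The core of the calibration described above is recovering the rotation between the camera and laser-tracker frames from matched laser-beam direction vectors. A minimal sketch of that step, using an SVD-based (Kabsch-style) fit, is shown below; the synthetic directions are assumptions, and the paper additionally recovers the translation from the intersection of the beams.

```python
# Sketch: recover the rotation between two frames from matched unit direction
# vectors of the same laser beams, via an SVD (Kabsch-style) fit.
import numpy as np

def rotation_from_directions(d_cam, d_tracker):
    """d_cam, d_tracker: (N, 3) unit directions of the same beams in each frame."""
    H = d_cam.T @ d_tracker                    # cross-covariance of matched directions
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                      # maps camera directions to tracker frame

# Synthetic check: a known rotation applied to random unit directions.
rng = np.random.default_rng(1)
d_cam = rng.normal(size=(5, 3))
d_cam /= np.linalg.norm(d_cam, axis=1, keepdims=True)
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
d_tracker = d_cam @ R_true.T
print(np.allclose(rotation_from_directions(d_cam, d_tracker), R_true, atol=1e-8))
```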

  20. Water spray ventilator system for continuous mining machines

    Science.gov (United States)

    Page, Steven J.; Mal, Thomas

    1995-01-01

    The invention relates to a water spray ventilator system mounted on a continuous mining machine to streamline airflow and provide effective face ventilation of both respirable dust and methane in underground coal mines. This system has two side spray nozzles mounted one on each side of the mining machine and six spray nozzles disposed on a manifold mounted to the underside of the machine boom. The six spray nozzles are angularly and laterally oriented on the manifold so as to provide non-overlapping spray patterns along the length of the cutter drum.

  1. Ground loops detection system in the RFX machine

    International Nuclear Information System (INIS)

    Bellina, F.; Pomaro, N.; Trevisan, F.

    1996-01-01

    RFX is a toroidal machine for the fusion research based on the RFP configuration. During the pulse, in any conductive loop close to the machine very strong currents can be induced, which may damage the diagnostics and the other instrumentation. To avoid loops, the earthing system of the machine is tree-shaped. However, an accidental contact between metallic earthed masses of the machine may give rise to an unwanted loop as well. An automatic system for the detection of ground loops in the earthing system has therefore been developed, which works continuously during shutdown intervals and between pulses. In the paper the design of the detection system is presented, together with the experimental results on prototypes. 4 refs., 3 figs., 1 tab

  2. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for features coordinates determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system-prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a-priory assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  3. Practical implementation of machine tool metrology and maintenance management systems

    International Nuclear Information System (INIS)

    Perkins, C; Longstaff, A P; Fletcher, S; Willoughby, P

    2012-01-01

    Maximising asset utilisation and minimising downtime and waste are becoming increasingly important to all manufacturing facilities as competition increases and profits decrease. The tools to assist with monitoring these machining processes are becoming more and more in demand. A system designed to fulfil the needs of machine tool operators and supervisors has been developed and its impact on the precision manufacturing industry is being considered. The benefits of implementing this system, compared to traditional methods, will be discussed here.

  4. Creating photorealistic virtual model with polarization-based vision system

    Science.gov (United States)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models are used in many fields such as education, medical services, entertainment, art, digital archive, etc., because of the progress of computational time and demand for creating photorealistic virtual model is increasing for higher reality. In computer vision field, a number of techniques have been developed for creating the virtual model by observing the real object in computer vision field. In this paper, we propose the method for creating photorealistic virtual model by using laser range sensor and polarization based image capture system. We capture the range and color images of the object which is rotated on the rotary table. By using the reconstructed object shape and sequence of color images of the object, parameter of a reflection model are estimated in a robust manner. As a result, then, we can make photorealistic 3D model in consideration of surface reflection. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then, reflectance parameters of each reflection component are estimated separately. In separation of reflection components, we use polarization filter. This approach enables estimation of reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected lights. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.

  5. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    Science.gov (United States)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    . These initial results show promising performance in segmenting images, including multi-class scenes with complex boundaries. In particular, the system was able to learn to distinguish between successive layers of volcanic deposits, including massive basalts overlaying lahar materials. It was also able to separate clasts from ground mass in outcrops of impact breccia, and to find veins of hydrated material within a clay-bearing host rock. The tests also reveal initial details about the types of visual information relevant to segmentation of these types of scenes, providing guidance for further development of the technique. Funding for this work was provided in part by the Canadian Astrobiology Training Program. A portion of this research was performed at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2013 The University of Western Ontario. All Rights Reserved.

  6. The evolution and practical application of machine translation system (1)

    Science.gov (United States)

    Tominaga, Isao; Sato, Masayuki

    This paper describes the development, practical application and problems of machine translation systems, the evaluation of practical systems, and development trends in machine translation. Most recent systems face the following four problems: 1) the vagueness of a text, 2) differences in the definition of terminology between languages, 3) the preparation of a large-scale translation dictionary, and 4) the development of software for logical inference. Machine translation systems are already used practically in many industrial fields. However, many problems remain unsolved, and the implementation of an ideal system is expected to take another 15 years. This paper also describes seven evaluation items in detail. This English abstract was made by the Mu system.

  7. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control, which operates in the 3D workspace, uses real-time image processing to perform tasks of feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden at the vision sensor feedback, we design a FPGA-based motion-vision integrated system that employs dedicated hardware circuits for processing vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axes position feedback control.

  8. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrumental color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.
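
    The prediction step described above maps image features extracted by the CVS to a quality grade with a support vector machine. A minimal sketch of such a model is given below; the 107-dimensional feature vector follows the feature counts quoted in the abstract, while the synthetic data, grade labels and SVM settings are illustrative assumptions.

```python
# Sketch of an SVM quality-grade predictor over CVS image features. The feature
# dimensionality follows the abstract (18 color + 1 marbling + 88 texture); the
# synthetic data, labels and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 107))           # stand-in image feature vectors
y = rng.integers(0, 3, size=200)          # stand-in quality grade labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```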

  9. Technique for Increasing Accuracy of Positioning System of Machine Tools

    Directory of Open Access Journals (Sweden)

    Sh. Ji

    2014-01-01

    Full Text Available The aim of this research is to improve the accuracy of the positioning and processing system using a technique for optimizing the pressure diagrams of machine-tool guides. Machining quality is directly related to accuracy, which characterizes the degree to which various machine errors have an impact. The accuracy of the positioning system is one of the most significant machining characteristics, allowing the accuracy of processed parts to be evaluated. The literature describes the working area of the machine layout as rather informative for characterizing the effect of the positioning system on the macro-geometry of the part surfaces to be processed. To enhance the static accuracy of the studied machine, two groups of measures are possible in principle. One of them aims at decreasing the cutting force component that produces the overturning moments on the slider. The other group of measures is related to changing the sizes of the guide facets, which may lead to a change in their profile. The study was based on mathematical modeling and optimization of the cutting zone coordinates, and a formula to determine the surface pressure on the guides was found. The selected optimization parameters are the cutting force vectors and the values of the slides and guides. The results obtained show that a technique for optimizing the coordinates of the cutting zone is necessary to increase processing accuracy. The research has established that, to define the optimal coordinates of the cutting zone, the sizes of the slides and the values and coordinates of the applied forces have to be changed, equalizing the pressure and improving the accuracy of the positioning system of machine tools. At different points of the workspace a force vector is applied and pressure diagrams are found that take into account changes in the parameters of the positioning system, and pressure diagram equalization providing the highest accuracy of machine tools is achieved.

  10. Status Checking System of Home Appliances using machine learning

    Directory of Open Access Journals (Sweden)

    Yoon Chi-Yurl

    2017-01-01

    Full Text Available This paper describes a status checking system for home appliances based on machine learning, which can be applied to existing household appliances without a networking function. The designed status checking system consists of sensor modules, a wireless communication module, a cloud server, an Android application and a machine learning algorithm. The developed system, applied to a washing machine, analyses and judges four kinds of appliance status: staying, washing, rinsing and spin-drying. The sensor measurements and the transmission of sensing data are handled by an Arduino board, and the data are transmitted to the cloud server in real time. The collected data are parsed by an Android application and fed into the machine learning algorithm to learn the status of the appliance. The machine learning algorithm compares the stored learning data with real-time data collected from the appliance. Our results are expected to contribute as a base technology for designing automatic control systems based on machine learning for household appliances in real time.

  11. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    Full Text Available The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.

  12. THE PHENOMENON OF EUROPEAN MUSICAL ROMANTICISM IN SYSTEMIC RESEARCH VISION

    Directory of Open Access Journals (Sweden)

    FLOREA AUGUSTINA

    2015-09-01

    Full Text Available The Romanticism – European cultural-artistic phenomenon of the 20th century, developed in various fields of philosophy, literature, arts, and in terms of its amplitude and universality marked the respective century as a Romantic Era – is promoted in the most pointed manner in musical art. The Research of musical Romanticism – in the conceptual, aesthetic, musical aspect – can be achieved only on the basis of a systemic vision, which inputs the necessity of a study of synthesis. The respective study will integrate in a single process the investigation of all the above – mentioned aspects and will take place at the intersection of different scientific domains: aesthetics and musical aesthetics, historical and theoretical musicology, history and theory of interpretative art.

  13. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for camera calibration of intrinsic parameters is addressed, and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated. Furthermore, 3D Euclidean reconstruction by using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can be easily incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision and related subjects.

  14. Man-machine interfaces analysis system based on computer simulation

    International Nuclear Information System (INIS)

    Chen Xiaoming; Gao Zuying; Zhou Zhiwei; Zhao Bingquan

    2004-01-01

    The paper describes a software assessment system, Dynamic Interaction Analysis Support (DIAS), based on computer simulation technology for the man-machine interfaces (MMI) of a control room. It employs a computer to simulate operating procedures performed on the man-machine interfaces of a control room, provides a quantified assessment, and at the same time analyses operator error rates by means of human error rate prediction techniques. Problems in the placement of man-machine interfaces in a control room and in the arrangement of instruments can be detected from the simulation results. The DIAS system can provide good technical support for the design and improvement of the man-machine interfaces of the main control room of a nuclear power plant.

  15. 2020 Vision for Tank Waste Cleanup (One System Integration) - 12506

    Energy Technology Data Exchange (ETDEWEB)

    Harp, Benton; Charboneau, Stacy; Olds, Erik [US DOE (United States)

    2012-07-01

    The mission of the Department of Energy's Office of River Protection (ORP) is to safely retrieve and treat the 56 million gallons of Hanford's tank waste and close the Tank Farms to protect the Columbia River. The millions of gallons of waste are a by-product of decades of plutonium production. After irradiated fuel rods were taken from the nuclear reactors to the processing facilities at Hanford they were exposed to a series of chemicals designed to dissolve away the rod, which enabled workers to retrieve the plutonium. Once those chemicals were exposed to the fuel rods they became radioactive and extremely hot. They also couldn't be used in this process more than once. Because the chemicals are caustic and extremely hazardous to humans and the environment, underground storage tanks were built to hold these chemicals until a more permanent solution could be found. The Cleanup of Hanford's 56 million gallons of radioactive and chemical waste stored in 177 large underground tanks represents the Department's largest and most complex environmental remediation project. Sixty percent by volume of the nation's high-level radioactive waste is stored in the underground tanks grouped into 18 'tank farms' on Hanford's central plateau. Hanford's mission to safely remove, treat and dispose of this waste includes the construction of a first-of-its-kind Waste Treatment Plant (WTP), ongoing retrieval of waste from single-shell tanks, and building or upgrading the waste feed delivery infrastructure that will deliver the waste to and support operations of the WTP beginning in 2019. Our discussion of the 2020 Vision for Hanford tank waste cleanup will address the significant progress made to date and ongoing activities to manage the operations of the tank farms and WTP as a single system capable of retrieving, delivering, treating and disposing Hanford's tank waste. The initiation of hot operations and subsequent full operations

  16. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    Science.gov (United States)

    2015-03-26

    No abstract is available in this record; only thesis title-page text was captured: "Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System", Master's thesis by David W. Jones, Capt, USAF, AFIT-ENG-MS-15-M-020, Air Force Institute of Technology; distribution unlimited; not subject to copyright protection in the United States.

  17. Expert System Architecture for Rocket Engine Numerical Simulators: A Vision

    Science.gov (United States)

    Mitra, D.; Babu, U.; Earla, A. K.; Hemminger, Joseph A.

    1998-01-01

    Simulation of any complex physical system like rocket engines involves modeling the behavior of their different components using mostly numerical equations. Typically a simulation package would contain a set of subroutines for these modeling purposes and some other ones for supporting jobs. A user would create an input file configuring a system (part or whole of a rocket engine to be simulated) in appropriate format understandable by the package and run it to create an executable module corresponding to the simulated system. This module would then be run on a given set of input parameters in another file. Simulation jobs are mostly done for performance measurements of a designed system, but could be utilized for failure analysis or a design job such as inverse problems. In order to use any such package the user needs to understand and learn a lot about the software architecture of the package, apart from being knowledgeable in the target domain. We are currently involved in a project in designing an intelligent executive module for the rocket engine simulation packages, which would free any user from this burden of acquiring knowledge on a particular software system. The extended abstract presented here will describe the vision, methodology and the problems encountered in the project. We are employing object-oriented technology in designing the executive module. The problem is connected to the areas like the reverse engineering of any simulation software, and the intelligent systems for simulation.

  18. Iris recognition and what is next? Iris diagnosis: a new challenging topic for machine vision from image acquisition to image interpretation

    Science.gov (United States)

    Perner, Petra

    2017-03-01

    Molecular image-based techniques are widely used in medicine to detect specific diseases. Visual diagnosis is an important issue, and the analysis of the eye also plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization by an automatic system can be a new and challenging field for machine vision. Compared to iris recognition, iris diagnosis places much higher demands on the acquisition and interpretation of iris images. Iris diagnosis (iridology) is understood as the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illnesses, but also in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis for the diagnosis of illnesses and for individual health protection. In this paper, we describe our work towards an automatic iris diagnosis system. We describe the image acquisition and its problems, and explain different ways of acquiring and preprocessing the images. We describe the image analysis method for detecting the iris, and the meta-model for image interpretation is given. Based on this model we show the many tasks for image analysis, which range from image-object feature analysis and spatial image analysis to color image analysis. Our first results for the recognition of the iris are given. We describe how to detect the pupil and unwanted lamp spots, and explain how to recognize orange-blue spots in the iris and match them against the topological map of the iris. Finally, we give an outlook on further work.

  19. Vision for an Open, Global Greenhouse Gas Information System (GHGIS)

    Science.gov (United States)

    Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team

    2010-12-01

    Over the next few years, an increasing number of entities ranging from international, national, and regional governments, to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system, to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We will present a vision for an open, global GHGIS including latest analysis of system requirements, critical gaps, and relationship to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.

  20. From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning

    Science.gov (United States)

    Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle

    2016-04-01

    The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost - a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case in part due to larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size being clearly visible in radar and infrared satellite imagery, which makes it easier to detect using Computer Vision (CV). The power of CV techniques makes basic analysis thus developed scalable to other smaller and less known, but still influential, currents, which are not just curves on a map, but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier Infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as feature space representation in a machine learning context, in this case with the EC's H2020-sponsored 'See.4C' project, in the context of which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of Gulf Stream position exist, they differ in methodology

  1. Prediction of Banking Systemic Risk Based on Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Shouwei Li

    2013-01-01

    Full Text Available Banking systemic risk is a complex nonlinear phenomenon, and the recent financial crisis has shed light on the importance of safeguarding financial stability. Given the complex nonlinear characteristics of banking systemic risk, in this paper we apply a support vector machine (SVM) to the prediction of banking systemic risk in an attempt to suggest a new model with better explanatory power and stability. We conduct a case study of an SVM-based prediction model for Chinese banking systemic risk, and the experimental results show that the support vector machine is an efficient method in such a case.

  2. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  3. A future vision of nuclear material information systems

    International Nuclear Information System (INIS)

    Suski, N.; Wimple, C.

    1999-01-01

    To address the current and future needs for nuclear materials management and safeguards information, Lawrence Livermore National Laboratory envisions an integrated nuclear information system that will support several functions. The vision is to link distributed information systems via a common communications infrastructure designed to address the information interdependencies between two major elements: Domestic, with information about specific nuclear materials and their properties, and International, with information pertaining to foreign nuclear materials, facility design and operations. The communication infrastructure will enable data consistency, validation and reconciliation, as well as provide a common access point and user interface for a broad range of nuclear materials information. Information may be transmitted to, from, and within the system by a variety of linkage mechanisms, including the Internet. Strict access control will be employed as well as data encryption and user authentication to provide the necessary information assurance. The system can provide a mechanism not only for data storage and retrieval, but will eventually provide the analytical tools necessary to support the U.S. government's nuclear materials management needs and non-proliferation policy goals

  4. Homopolar machine for reversible energy storage and transfer systems

    International Nuclear Information System (INIS)

    Stillwagon, R.E.

    1978-01-01

    A homopolar machine designed to operate as a generator and motor in reversibly storing and transferring energy between the machine and a magnetic load coil for a thermonuclear reactor is described. The machine rotor comprises hollow thin-walled cylinders or sleeves which form the basis of the system by utilizing substantially all of the rotor mass as a conductor thus making it possible to transfer substantially all the rotor kinetic energy electrically to the load coil in a highly economical and efficient manner. The rotor is divided into multiple separate cylinders or sleeves of modular design, connected in series and arranged to rotate in opposite directions but maintain the supply of current in a single direction to the machine terminals

  5. Shape understanding system machine understanding and human understanding

    CERN Document Server

    Les, Zbigniew

    2015-01-01

    This is the third book presenting selected results of research on the further development of the shape understanding system (SUS), carried out by the authors in the newly founded Queen Jadwiga Research Institute of Understanding. In this book the new term Machine Understanding is introduced, referring to a new area of research that investigates the possibility of building machines with the ability to understand. It is argued that SUS needs to some extent to mimic human understanding, and for this reason machines are evaluated according to the rules applied to the evaluation of human understanding. The book shows how to formulate problems and how to test whether the machine is able to solve them.

  6. Machine and plasma diagnostic instrumentation systems for the Tandem Mirror Experiment Upgrade

    International Nuclear Information System (INIS)

    Coutts, G.W.; Coffield, F.E.; Lang, D.D.; Hornady, R.S.

    1981-01-01

    To evaluate the performance of a second-generation Tandem Mirror machine, an extensive instrumentation system is being designed and installed as part of the major device fabrication. The systems listed will be operational during the start-up phase of the TMX Upgrade machine and will provide benchmarks for future performance data. In addition to plasma diagnostic instrumentation, machine parameter monitoring systems will be installed prior to machine operation. Simultaneous recording of machine parameters will permit evaluation of plasma parameters sensitive to machine conditions

  7. A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture

    Directory of Open Access Journals (Sweden)

    Yuanhong Zhong

    2018-05-01

    Full Text Available Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: first, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse-counting method based on You Only Look Once (YOLO) object detection, and the classification and fine-counting method based on Support Vector Machines (SVM) using global features, are designed. Finally, the insect counting and recognition system is implemented on a Raspberry Pi. Six species of flying insects, including bee, fly, mosquito, moth, chafer and fruit fly, are selected to assess the effectiveness of the system. Compared with conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and the average classification accuracy is 90.18% on the Raspberry Pi. The proposed system is easy to use and provides efficient and accurate recognition data; therefore, it can be used for intelligent agriculture applications.
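
    As a rough illustration of the second (fine-counting) stage described above, the sketch below classifies detected insect crops with an SVM on a simple global colour feature and counts detections per predicted class. The feature, the mocked detector output and the synthetic crops are assumptions; the actual system uses YOLO detections from sticky-trap images and its own global feature set.

        # Hedged sketch of SVM fine counting on global features of detected crops.
        import numpy as np
        from sklearn.svm import SVC

        def global_color_feature(crop, bins=8):
            """Concatenated per-channel intensity histograms as a global feature."""
            feats = [np.histogram(crop[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
            feat = np.concatenate(feats).astype(float)
            return feat / (feat.sum() + 1e-9)

        rng = np.random.default_rng(1)
        species = ["bee", "fly", "mosquito", "moth", "chafer", "fruit fly"]

        def mock_crop(label):
            # Stand-in for a YOLO-detected crop; each class gets a different mean colour
            mean = 40 * (label + 1)
            return np.clip(rng.normal(mean, 25, size=(32, 32, 3)), 0, 255)

        labels = rng.integers(0, len(species), size=300)
        X = np.stack([global_color_feature(mock_crop(l)) for l in labels])

        clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)

        # Fine counting = counting detections per predicted class in one image
        pred = clf.predict(X[:50])
        counts = {species[i]: int((pred == i).sum()) for i in range(len(species))}
        print(counts)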

  8. A supervisor system for computer aided laser machining

    International Nuclear Information System (INIS)

    Mukherjee, J.K.

    1990-01-01

    Lasers produce a non-divergent beam of short-wavelength energy which can propagate through the normal atmosphere with little divergence and can be focused onto very fine points. The resulting high energy per unit area on the target is highly localised and suitable for various types of machining at high speeds. The most notable factor is that this high-energy spot can be located precisely using light-weight optical components. Laser machining is tolerant of normal environmental conditions, unlike electron beam and other techniques. Precision cutting and welding of nuclear materials in normal or non-oxidising atmospheres can be done fairly easily using this approach. To achieve these objectives, development of a computer-controlled laser machining system has been undertaken. The development project aims at building a computer-aided machine with an indigenous controller and a medium-power laser suitable for cutting, welding, and marking. This paper describes the integration of the various computer-aided functions, spanning the full range from job definition to delivery of the final finished part, in computer-aided laser machining. Various innovative features of the system that render it suitable for laser tool development as well as for special machining applications with user-friendliness are covered. (author). 5 refs., 5 figs

  9. Neuromorphic VLSI vision system for real-time texture segregation.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field-programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
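
    A software analogue of the orientation-selective processing described above can be sketched with a Gabor filter bank: filter a texture image at two orthogonal orientations, pool the rectified responses, and compare the pooled energies to segregate regions. This is only an illustration of the computational model under assumed filter parameters, not a description of the chip hardware.

        # Hedged software analogue of Gabor-based texture segregation.
        import numpy as np
        import cv2

        def texture_energy(img, theta, ksize=21, sigma=4.0, lambd=8.0):
            # Gabor kernel tuned to one orientation; rectify and pool the response
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
            resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
            return cv2.GaussianBlur(np.abs(resp), (31, 31), 8.0)

        # Synthetic texture: left half vertical stripes, right half horizontal stripes
        x = np.arange(256)
        stripes = ((np.sin(2 * np.pi * x / 8) > 0) * 255).astype(np.uint8)
        vert = np.tile(stripes, (256, 1))        # intensity varies along columns
        horiz = vert.T                           # intensity varies along rows
        img = np.hstack([vert[:, :128], horiz[:, 128:]])

        e_vert = texture_energy(img, theta=0)            # responds to vertical stripes
        e_horiz = texture_energy(img, theta=np.pi / 2)   # responds to horizontal stripes

        label = e_vert > e_horiz                 # True where vertical texture dominates
        print("left half vertical fraction: ", label[:, :128].mean())
        print("right half vertical fraction:", label[:, 128:].mean())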

  10. Interlock system of electron beam machine GJ-2

    International Nuclear Information System (INIS)

    Marnada, Nada

    1999-01-01

    The electron beam machine (EBM) irradiation facility is an irradiation installation, like facilities that use radionuclides as their radiation source. Three safety aspects have to be considered in the facility: the safety of people, of the machine, and of the samples to be irradiated. The safety aspect for people concerns the radiation hazard, while the safety aspects for the machine and samples concern damage resulting from operating failures. In the EBM GJ-2 (made in China), twelve interlock system parameters are installed to cover all of these safety aspects. Each interlock system consists of a transducer that controls a certain switch, a magnetic relay, and visible and audible interlock indicators. To improve the reliability of the interlock systems, a redundancy method is applied to the high-voltage operation systems. (author)

  11. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low-level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used....

  12. New vision solar system mission study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  13. Computer Vision Based Smart Lane Departure Warning System for Vehicle Dynamics Control

    Directory of Open Access Journals (Sweden)

    Ambarish G. Mohapatra

    2011-09-01

    Full Text Available A collision avoidance system solves many problems caused by traffic congestion worldwide through a synergy of new information technologies for simulation, real-time control and communication networks; such a system is characterized as an intelligent vehicle system. Traffic congestion has been increasing worldwide as a result of increased motorization, urbanization, population growth and changes in population density. Congestion reduces utilization of the transportation infrastructure and increases travel time, air pollution, fuel consumption and, most importantly, traffic accidents. The main objective of this work is to develop a machine vision system for lane departure detection and warning, measuring lane-related parameters such as heading angle, lateral deviation, yaw rate and sideslip angle from the road scene image using standard image processing techniques, which can be used to automate the steering of a motor vehicle. The exact position of the steering wheel can be monitored using a steering wheel sensor. The core of this work is a Hough-transform-based edge detection technique for the detection of lane departure parameters. The prototype designed for this work has been tested in a running vehicle for the monitoring of real-time lane-related parameters.
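
    The core technique named above, Hough-transform-based detection of lane boundaries, can be sketched with OpenCV as follows. The Canny thresholds, the region of interest and the way heading angle and lateral deviation are derived from the detected segments are illustrative assumptions, not the prototype's actual processing chain.

        # Hedged sketch of Hough-based lane boundary detection.
        import numpy as np
        import cv2

        def detect_lane_lines(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)

            # Keep only the lower half of the image, where lane markings normally appear
            h, w = edges.shape
            mask = np.zeros_like(edges)
            mask[h // 2:, :] = 255
            edges = cv2.bitwise_and(edges, mask)

            # Probabilistic Hough transform returns candidate line segments
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                    minLineLength=40, maxLineGap=20)
            return [] if lines is None else [l[0] for l in lines]

        def lane_parameters(lines, image_width):
            """Very rough heading-angle / lateral-offset estimate from detected segments."""
            if not lines:
                return None
            angles, centers = [], []
            for x1, y1, x2, y2 in lines:
                angles.append(np.arctan2(y2 - y1, x2 - x1))
                centers.append((x1 + x2) / 2)
            heading = float(np.mean(angles))                     # radians, image frame
            lateral = float(np.mean(centers) - image_width / 2)  # pixels off image centre
            return heading, lateral

        # Usage on a synthetic frame (a real system would use road-scene video frames)
        frame = np.zeros((240, 320, 3), np.uint8)
        cv2.line(frame, (60, 239), (140, 120), (255, 255, 255), 3)
        cv2.line(frame, (260, 239), (180, 120), (255, 255, 255), 3)
        print(lane_parameters(detect_lane_lines(frame), frame.shape[1]))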

  14. Machine Protection and Interlock Systems for Circular Machines - Example for LHC

    CERN Document Server

    Schmidt, R.

    2016-01-01

    This paper introduces the protection of circular particle accelerators from accidental beam losses. Already the energy stored in the beams for accelerators such as the TEVATRON at Fermilab and Super Proton Synchrotron (SPS) at CERN could cause serious damage in case of uncontrolled beam loss. With the CERN Large Hadron Collider (LHC), the energy stored in particle beams has reached a value two orders of magnitude above previous accelerators and poses new threats with respect to hazards from the energy stored in the particle beams. A single accident damaging vital parts of the accelerator could interrupt operation for years. Protection of equipment from beam accidents is mandatory. Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. extraction of the beam towards a dedicated beam dump block o...

  15. Homopolar machine for reversible energy storage and transfer systems

    Science.gov (United States)

    Stillwagon, Roy E.

    1978-01-01

    A homopolar machine designed to operate as a generator and motor in reversibly storing and transferring energy between the machine and a magnetic load coil for a thermo-nuclear reactor. The machine rotor comprises hollow thin-walled cylinders or sleeves which form the basis of the system by utilizing substantially all of the rotor mass as a conductor thus making it possible to transfer substantially all the rotor kinetic energy electrically to the load coil in a highly economical and efficient manner. The rotor is divided into multiple separate cylinders or sleeves of modular design, connected in series and arranged to rotate in opposite directions but maintain the supply of current in a single direction to the machine terminals. A stator concentrically disposed around the sleeves consists of a hollow cylinder having a number of excitation coils each located radially outward from the ends of adjacent sleeves. Current collected at an end of each sleeve by sleeve slip rings and brushes is transferred through terminals to the magnetic load coil. Thereafter, electrical energy returned from the coil then flows through the machine which causes the sleeves to motor up to the desired speed in preparation for repetition of the cycle. To eliminate drag on the rotor between current pulses, the brush rigging is designed to lift brushes from all slip rings in the machine.

  16. Homopolar machine for reversible energy storage and transfer systems

    International Nuclear Information System (INIS)

    Stillwagon, R.E.

    1981-01-01

    A homopolar machine designed to operate as a generator and motor in reversibly storing and transferring energy between the machine and a magnetic load coil for a thermo-nuclear reactor. The machine rotor comprises hollow thin-walled cylinders or sleeves which form the basis of the system by utilizing substantially all of the rotor mass as a conductor thus making it possible to transfer substantially all the rotor kinetic energy electrically to the load coil in a highly economical and efficient manner. The rotor is divided into multiple separate cylinders or sleeves of modular design, connected in series and arranged to rotate in opposite directions but maintain the supply of current in a single direction to the machine terminals. A stator concentrically disposed around the sleeves consists of a hollow cylinder having a number of excitation coils each located radially outward from the ends of adjacent sleeves. Current collected at an end of each sleeve by sleeve slip rings and brushes is transferred through terminals to the magnetic load coil. Thereafter, electrical energy returned from the coil then flows through the machine which causes the sleeves to motor up to the desired speed in preparation for repetition of the cycle. To eliminate drag on the rotor between current pulses, the brush rigging is designed to lift brushes from all slip rings in the machine

  17. Basic researches for advancement of man-machine systems

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu

    1994-01-01

    The historical development of plant instrumentation and control systems accompanying the introduction of automation is shown using the example of nuclear power plants, and the change in the role of operators in the man-machine system is discussed. Human error is a serious problem in various fields, and automation is intended to resolve it, yet complex systems have also caused various disasters arising from the relation between people and machines. The problem of human factors in the automation of high-risk systems is considered from two sides: the increase of reliability and the reduction of the burden on workers obtained by decreasing human participation, versus the increased risk of large accidents due to the lowered reliability of the human element and the greater demands placed on worker training. Topics discussed include human models and the framework of human error analysis, the development of systems for man-machine system design and for information analysis and evaluation, the significance of physiological index measurement and the prospects for its application, the analysis of the behavior of subjects in abnormality-diagnosis experiments using a plant simulator, and the development of research on mutually adaptive interfaces. In this paper, the problem of human factors in system safety brought about by technical advancement is examined, and the author's basic research on the advancement of man-machine systems is reported. (K.I.)

  18. Evaluation of man-machine systems - methods and problems

    International Nuclear Information System (INIS)

    1985-01-01

    The symposium gives a survey of the evaluation methods which permit as quantitative an assessment as possible of the collaboration between humans and machines. This complex of problems is of great current significance in many areas of application. The systems to be evaluated include aircraft, land vehicles and watercraft as well as process control systems. (orig./GL) [de

  19. KNOWLEDGE-BASED ROBOT VISION SYSTEM FOR AUTOMATED PART HANDLING

    Directory of Open Access Journals (Sweden)

    J. Wang

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper discusses an algorithm incorporating a knowledge-based vision system into an industrial robot system for handling parts intelligently. A continuous fuzzy controller was employed to extract boundary information in a computationally efficient way. The developed algorithm for on-line part recognition using fuzzy logic is shown to be an effective solution to extract the geometric features of objects. The proposed edge vector representation method provides enough geometric information and facilitates the object geometric reconstruction for gripping planning. Furthermore, a part-handling model was created by extracting the grasp features from the geometric features.

    AFRIKAANSE OPSOMMING: This article describes a knowledge-based vision system algorithm that is incorporated into an industrial robot system in order to achieve intelligent component handling. A continuous fuzzy controller was used to determine object boundary information by means of an efficient computation method. The developed algorithm for on-line component recognition uses fuzzy logic and is shown to be an effective method of determining the geometric information of objects. The proposed edge vector method provides sufficient information and makes geometric reconstruction of the object possible for grip planning. Furthermore, a part-handling model was developed by deriving the grasp features from the geometric properties.

  20. The immune system, adaptation, and machine learning

    Science.gov (United States)

    Farmer, J. Doyne; Packard, Norman H.; Perelson, Alan S.

    1986-10-01

    The immune system is capable of learning, memory, and pattern recognition. By employing genetic operators on a time scale fast enough to observe experimentally, the immune system is able to recognize novel shapes without preprogramming. Here we describe a dynamical model for the immune system that is based on the network hypothesis of Jerne, and is simple enough to simulate on a computer. This model has a strong similarity to an approach to learning and artificial intelligence introduced by Holland, called the classifier system. We demonstrate that simple versions of the classifier system can be cast as a nonlinear dynamical system, and explore the analogy between the immune and classifier systems in detail. Through this comparison we hope to gain insight into the way they perform specific tasks, and to suggest new approaches that might be of value in learning systems.

  1. High-precision micro/nano-scale machining system

    Science.gov (United States)

    Kapoor, Shiv G.; Bourne, Keith Allen; DeVor, Richard E.

    2014-08-19

    A high precision micro/nanoscale machining system. A multi-axis movement machine provides relative movement along multiple axes between a workpiece and a tool holder. A cutting tool is disposed on a flexible cantilever held by the tool holder, the tool holder being movable to provide at least two of the axes to set the angle and distance of the cutting tool relative to the workpiece. A feedback control system uses measurement of deflection of the cantilever during cutting to maintain a desired cantilever deflection and hence a desired load on the cutting tool.

  2. Variable-Speed, Robust Synchronous Reluctance Machine Drive Systems

    DEFF Research Database (Denmark)

    Wang, Dong

    The synchronous reluctance machine drive is getting more and more interests from the industrial side, since it can provide higher system energy efficiency than traditional inverter-fed induction machine drive systems with similar production cost. It is considered as a good candidate for super...... is recommended. In recent years, there is an increasing trend to replace the electrolytic capacitor in the frequency converter with film capacitor, which has a longer expected service lifetime and no explosion risk. Furthermore, it is possible to achieve a compact converter design by using film capacitor, since...

  3. [Functional state of vision system under chronic mercury intoxication].

    Science.gov (United States)

    Iablonskaia, D A; Mishchenko, T S; Lakhman, O L; Rukavishnikov, V S; Malyshev, V V

    2010-01-01

    Examination of patients with chronic mercury intoxication in the late (post-contact) period revealed marked vision disorders and reduced neural conductivity in the neuronal structures of the retina and optic nerve.

  4. Integrated human-machine intelligence in space systems

    Science.gov (United States)

    Boy, Guy A.

    1992-01-01

    The integration of human and machine intelligence in space systems is outlined with respect to the contributions of artificial intelligence. The current state-of-the-art in intelligent assistant systems (IASs) is reviewed, and the requirements of some real-world applications of the technologies are discussed. A concept of integrated human-machine intelligence is examined in the contexts of: (1) interactive systems that tolerate human errors; (2) systems for the relief of workloads; and (3) interactive systems for solving problems in abnormal situations. Key issues in the development of IASs include the compatibility of the systems with astronauts in terms of inputs/outputs, processing, real-time AI, and knowledge-based system validation. Real-world applications are suggested, such as the diagnosis, planning, and control of engineered systems.

  5. Design of electric control system for automatic vegetable bundling machine

    Science.gov (United States)

    Bao, Yan

    2017-06-01

    The design meets the requirements of an automatic vegetable bundling machine and has the advantages of a simple circuit, small volume, low cost, and easy extension of the electric control system. The machine uses sensors for detection and control in order to meet the control requirements; the binding force can be adjusted with a button; the strapping speed can likewise be set with keys; the sensors are connected to the mechanical line for convenient operation; and the machine can be connected directly by plug to a 220 V power supply. During operation, the sensors transmit signals to an MCU, which controls the motor through the drive and control routines for the small motor. The working principles of the LED control circuit and the temperature control circuit are described, completing the design of the electric control system for the automatic vegetable bundling machine.

  6. Self-Adaptive Systems for Machine Intelligence

    CERN Document Server

    He, Haibo

    2011-01-01

    This book will advance the understanding and application of self-adaptive intelligent systems; therefore it will potentially benefit the long-term goal of replicating certain levels of brain-like intelligence in complex and networked engineering systems. It will provide new approaches for adaptive systems within uncertain environments. This will provide an opportunity to evaluate the strengths and weaknesses of the current state-of-the-art of knowledge, give rise to new research directions, and educate future professionals in this domain. Self-adaptive intelligent systems have wide application

  7. Combined measurement system for double shield tunnel boring machine guidance based on optical and visual methods.

    Science.gov (United States)

    Lin, Jiarui; Gao, Kai; Gao, Yang; Wang, Zheng

    2017-10-01

    In order to detect the position of the cutting shield at the head of a double shield tunnel boring machine (TBM) during the excavation, this paper develops a combined measurement system which is mainly composed of several optical feature points, a monocular vision sensor, a laser target sensor, and a total station. The different elements of the combined system are mounted on the TBM in suitable sequence, and the position of the cutting shield in the reference total station frame is determined by coordinate transformations. Subsequently, the structure of the feature points and matching technique for them are expounded, the position measurement method based on monocular vision is presented, and the calibration methods for the unknown relationships among different parts of the system are proposed. Finally, a set of experimental platforms to simulate the double shield TBM is established, and accuracy verification experiments are conducted. Experimental results show that the mean deviation of the system is 6.8 mm, which satisfies the requirements of double shield TBM guidance.
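
    The coordinate transformations mentioned above can be illustrated by chaining homogeneous transforms from the cutting-shield frame through the vision-sensor and laser-target frames into the total-station (reference) frame. The numeric transforms below are invented placeholders; in the real system they come from the calibration procedures described in the paper.

        # Hedged sketch of chaining homogeneous transforms for TBM guidance.
        import numpy as np

        def homogeneous(R, t):
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = t
            return T

        def rot_z(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        # Calibrated (here: invented) transforms between consecutive frames
        T_station_laser = homogeneous(rot_z(0.02), [1000.0, 250.0, 30.0])   # total station -> laser target
        T_laser_camera  = homogeneous(rot_z(-0.01), [0.5, 0.1, 0.2])        # laser target -> vision sensor
        T_camera_shield = homogeneous(rot_z(0.00), [2.0, 0.0, 1.5])         # vision sensor -> cutting shield

        # Point of interest expressed in the cutting-shield frame (e.g. its origin)
        p_shield = np.array([0.0, 0.0, 0.0, 1.0])

        # Chain the transforms to obtain the point in the total-station (reference) frame
        T_station_shield = T_station_laser @ T_laser_camera @ T_camera_shield
        p_station = T_station_shield @ p_shield
        print("cutting shield position in reference frame:", p_station[:3])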

  8. System approach to machine building enterprise innovative activity management

    Directory of Open Access Journals (Sweden)

    І.V. Levytska

    2016-12-01

    Full Text Available A company which operates in a challenging competitive environment should focus on new products and provide innovative services that enhance its innovativeness in order to maintain its market position. The article deals with the peculiarities of such activity in a company. The authors analyze the various approaches used in management and set out the advantages and disadvantages of each. It is determined that the optimal approach among them is the system approach. Definitions of the concepts "a system" and "a systematic approach to innovative activity management" are suggested. The article works out the system of machine building enterprise innovative activity management: the organization of machine building enterprise innovative activity; the planning of machine building enterprise innovative activity; the control in the system of machine building enterprise innovative activity management; and the elements of the control subsystem. The properties typical of the innovative management system are supplied. The managers engaged in enterprise innovative activity management must perform a number of the suggested tasks, which affect the efficiency of the enterprise as a whole. These tasks are performed using the systematic approach, providing competitive operation of the enterprise and quick adaptation to changes in the external environment.

  9. Control system of power supply for resistance welding machine

    Directory of Open Access Journals (Sweden)

    Світлана Костянтинівна Поднебенна

    2017-06-01

    Full Text Available This article describes the existing methods of heat energy stabilization realized in thyristor power supplies for resistance welding machines. The advantages and features of thyristor power supplies are described. A control system of a power supply for a resistance welding machine with stabilization of the heat energy in the welding spot has been developed. Measurements are performed on the primary winding of the welding transformer. The weld-spot heating energy is calculated as the difference between the energy consumed from the mains and the energy losses in the primary and secondary circuits of the welding transformer as well as the energy losses in the transformer core. The digital signal processing algorithms of the developed control system are described in the article. All measurements and calculations are performed automatically in real time. The input signals to the control system are the transformer primary voltage and current and the temperature of the welding circuit. The designed control system ensures control of the welding heat energy and is not influenced by supply voltage changes, by impedance changes caused by insertion of ferromagnetic mass into the welding circuit, or by temperature changes during the welding process. The developed control system for the resistance welding machine makes it possible to improve the quality of welded joints and increase the efficiency of the resistance welding machine.
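
    The energy balance described above can be sketched numerically: integrate the sampled primary voltage and current to get the energy drawn from the mains, then subtract estimated resistive losses in the primary and secondary circuits and the core loss. All waveforms and circuit parameters below are illustrative assumptions, not values from the article.

        # Hedged numerical sketch of the weld-spot energy balance.
        import numpy as np

        fs = 20_000                      # sampling rate, Hz
        t = np.arange(0, 0.2, 1 / fs)    # one 200 ms weld
        u1 = 380 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)          # primary voltage, V
        i1 = 60 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t - 0.4)     # primary current, A

        R1, R2 = 0.05, 2.0e-4            # primary / secondary circuit resistance, ohm (assumed)
        turns_ratio = 50                 # N1 / N2 (assumed)
        P_core = 150.0                   # assumed constant core loss, W

        i2 = i1 * turns_ratio            # ideal-transformer estimate of secondary current

        dt = 1 / fs
        E_mains = np.sum(u1 * i1) * dt                               # energy from the mains, J
        E_loss = np.sum(R1 * i1**2 + R2 * i2**2) * dt + P_core * t[-1]
        E_weld = E_mains - E_loss                                    # energy at the weld spot, J
        print(f"mains {E_mains:.0f} J, losses {E_loss:.0f} J, weld spot {E_weld:.0f} J")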

  10. Agent-Oriented Embedded Control System Design and Development of a Vision-Based Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2012-07-01

    Full Text Available This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV based on the multi-agent system (MAS methodology and embedded system resources. A three-phase agent-oriented design methodology Prometheus is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP and an advanced RISC machine (ARM by using the multitasking processing capacity of multiple microprocessors and system services of a real-time operating system (RTOS. As a paradigm, an onboard embedded controller is designed and developed for the AGV with a camera detecting guiding landmarks, and the entire procedure has a high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and the onboard intelligence of the agent-oriented embedded control system.

  11. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-Gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been made in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  12. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-Gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been made in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  13. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  14. Trustworthy Voting: From Machine to System

    NARCIS (Netherlands)

    Paul, N.; Tanenbaum, A.S.

    2009-01-01

    The authors describe an electronic voting approach that takes a system view, incorporating a trustworthy process based on open source software, simplified procedures, and built-in redundant safeguards that prevent tampering. © 2009 IEEE.

  15. Machine Learning Control For Highly Reconfigurable High-Order Systems

    Science.gov (United States)

    2015-01-02

    Report AFRL-OSR-VA-TR-2015-0012, Machine Learning Control for Highly Reconfigurable High-Order Systems. John Valasek, Aerospace Engineering, Texas Engineering Experiment Station. Grant FA9550-11-1-0302, period of performance 1 July 2011 to 29 September 2014.

  16. Mechatronic sensor system for robots and automated machines

    CSIR Research Space (South Africa)

    Shaik, AA

    2007-01-01

    Full Text Available machine makes a calculated estimate of where the tool-head should be. This is often achieved by monitoring sensors on axes that track linear translation and rotations of shafts or gears. For low precision applications this system is appropriate. However...

  17. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    Full Text Available A method for detecting a nested hardware virtual machine monitor (HVM) is proposed in this work. The method is based on an HVM timing attack: when an HVM is present in the system, the number of distinct execution-time values observed for instruction sequences increases. We use this property as the indicator in our detection method.
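
    The timing indicator described above can be illustrated, at a much coarser level, by repeatedly timing a fixed operation and counting how many distinct duration values occur. A real detector would time virtualization-sensitive instruction sequences from native code (for example with RDTSC); the Python sketch below only shows how such a distinct-value statistic is collected.

        # Hedged illustration of collecting a distinct-execution-time statistic.
        import time
        from collections import Counter

        def sample_durations(n=2000):
            durations = []
            for _ in range(n):
                t0 = time.perf_counter_ns()
                x = 0
                for k in range(200):        # fixed, deterministic instruction sequence
                    x += k * k
                durations.append(time.perf_counter_ns() - t0)
            return durations

        durs = sample_durations()
        distinct = len(set(durs))
        print(f"{distinct} distinct duration values out of {len(durs)} samples")
        print("most common durations:", Counter(durs).most_common(3))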

  18. Towards a Tool for Computer Supported Configuring of Machine Systems

    DEFF Research Database (Denmark)

    Hansen, Claus Thorp

    1996-01-01

    An engineering designer designing a product determines not only the product's component structure, but also a set of different structures which carry product behaviour and performance and make the product suited for its life phases. Whereas the nature of the elements of a machine system is fairly...

  19. Control of discrete event systems modeled as hierarchical state machines

    Science.gov (United States)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
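
    Reachability testing, the basic operation discussed above, can be sketched for a flat state machine as a breadth-first search over the transition relation. The toy transition table below is invented; the point of the AHSM method is precisely to avoid enumerating such a flat model for large hierarchical systems.

        # Hedged sketch of reachability testing on a flat state machine.
        from collections import deque

        def reachable(initial, transitions):
            """Breadth-first search; transitions: {state: {event: next_state}}."""
            seen = {initial}
            queue = deque([initial])
            while queue:
                state = queue.popleft()
                for nxt in transitions.get(state, {}).values():
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen

        # Invented toy model of a machining cell
        transitions = {
            "idle":      {"start": "loading"},
            "loading":   {"done": "machining", "fault": "error"},
            "machining": {"done": "unloading", "fault": "error"},
            "unloading": {"done": "idle"},
            "error":     {"reset": "idle"},
        }
        print(reachable("idle", transitions))
        print("'error' reachable:", "error" in reachable("idle", transitions))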

  20. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  1. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  2. Vision and laterality: does occlusion disclose a feedback processing advantage for the right hand system?

    Science.gov (United States)

    Buekers, M J; Helsen, W F

    2000-09-01

    The main purpose of this study was to examine whether manual asymmetries could be related to the superiority of the left hemisphere/right hand system in processing visual feedback. Subjects were tested when performing single (Experiment 1) and reciprocal (Experiment 2) aiming movements under different vision conditions (full vision, 20 ms on/180 ms off, 10/90, 40/160, 20/80, 60/120, 20/40). Although in both experiments right hand advantages were found, manual asymmetries did not interact with intermittent vision conditions. Similar patterns of results were found across vision conditions for both hands. These data do not support the visual feedback processing hypothesis of manual asymmetry. Motor performance is affected to the same extent for both hand systems when vision is degraded.

  3. Machine Shorthand, the Other Shorthand System

    Science.gov (United States)

    Bryce, Rose Ann

    1974-01-01

    A survey of high schools in St. Louis County, Cook County (Chicago), and Indianapolis, and all junior colleges in Illinois indicated a growing interest in touch shorthand with a corresponding increase in the number of schools offering this shorthand system. (Author/SC)

  4. Distributed Control System Design for Portable PC Based CNC Machine

    Directory of Open Access Journals (Sweden)

    Roni Permana Saputra

    2014-07-01

    Full Text Available The demand for automated machining has increased, prompting research into improvements aimed at goals such as portability, low-cost manufacturability, interoperability, and simplicity of machine usage. These improvements are pursued without neglecting performance analysis and usability evaluation. This research has designed a distributed control system for controlling a portable CNC machine. The design consists of a main processing unit, a secondary processing unit, motor control, and a motor driver. A preliminary simulation has been conducted for performance analysis, including linear accuracy and circular accuracy. The results achieved in the simulation show linear accuracy of up to 2 μm, with a total cost for the whole processing unit of up to 5 million IDR.

  5. High Accuracy Nonlinear Control and Estimation for Machine Tool Systems

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios

    Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more...... sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear....... The purpose is to ensure that full accuracy is maintained between service intervals and to advice when overhaul is needed. The thesis argues that quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low level control architectures...

  6. Automated reasoning in man-machine control systems

    International Nuclear Information System (INIS)

    Stratton, R.C.; Lusk, E.L.

    1983-01-01

    This paper describes a project being undertaken at Argonne National Laboratory to demonstrate the usefulness of automated reasoning techniques in the implementation of a man-machine control system being designed at the EBR-II nuclear power plant. It is shown how automated reasoning influences the choice of optimal roles for both man and machine in the system control process, both for normal and off-normal operation. In addition, the requirements imposed by such a system for a rigorously formal specification of operating states, subsystem states, and transition procedures have a useful impact on the analysis phase. The definitions and rules are discussed for a prototype system which is physically simple yet illustrates some of the complexities inherent in real systems

  7. Automatic Anthropometric System Development Using Machine Learning

    Directory of Open Access Journals (Sweden)

    Long The Nguyen

    2016-08-01

    Full Text Available A contactless automatic anthropometric system is proposed for reconstructing a 3D model of the human body using a conventional smartphone. Our approach involves three main steps. The first step is the extraction of 12 anthropological features. Then we determine the most important features. Finally, we employ these features to build the 3D model of the human body and classify subjects according to gender and the commonly used sizes.

  8. Beam loss monitor system for machine protection

    CERN Document Server

    Dehning, B

    2005-01-01

    Most beam loss monitoring systems are based on the detection of secondary shower particles which deposit their energy in the accelerator equipment and finally also in the monitoring detector. To allow efficient protection of the equipment, the likely loss locations have to be identified by tracking simulations or by using low-intensity beams. If superconducting magnets are used for the beam guiding system, not only damage protection is required but also quench prevention. The quench levels for high-field magnets are several orders of magnitude below the damage levels. To keep the operational efficiency high under such circumstances, the calibration factor between the energy deposition in the coils and the energy deposition in the detectors has to be accurately known. To allow reliable damage protection and quench prevention, the mean time between failures should be high. If in such a failsafe system the monitors are numerous, the false dump probability has to be kept low to keep a high operation...

  9. Modelling machine ensembles with discrete event dynamical system theory

    Science.gov (United States)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).

  10. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly...... of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7-degrees-of-freedom humanoid robot arm. Successful Ping-Pong playing between the robot arm and a human is achieved with a high success rate of 88%....
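
    One ingredient of such a system, predicting where the tracked ball will cross the robot's hitting plane, can be sketched by fitting a ballistic model to recent visual observations. The synthetic observations and the hitting-plane location below are assumptions; the cited work additionally relies on high-speed stereo detection and iterative updating of the arm trajectory.

        # Hedged sketch: ballistic prediction from tracked ball positions.
        import numpy as np

        g = 9.81
        rng = np.random.default_rng(2)

        # Synthetic observations of the ball (t, x, z) with a little visual noise
        t = np.linspace(0, 0.25, 30)
        x = 2.5 - 6.0 * t
        z = 0.9 + 2.0 * t - 0.5 * g * t**2
        obs_x = x + rng.normal(0, 0.005, t.size)
        obs_z = z + rng.normal(0, 0.005, t.size)

        # Fit linear motion in x and a parabola in z (the quadratic term should be near -g/2)
        vx, x0 = np.polyfit(t, obs_x, 1)
        az, vz, z0 = np.polyfit(t, obs_z, 2)

        # Predict when the ball reaches the assumed hitting plane x = 0.0
        t_hit = -x0 / vx
        z_hit = az * t_hit**2 + vz * t_hit + z0
        print(f"predicted interception: t = {t_hit:.3f} s, height z = {z_hit:.3f} m")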

  11. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    Acts as a single source reference providing readers with an overview of how computer vision can contribute to the different applications in the field of road transportation. This book presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each presented in a tutorial manner, describing the motivation for and benefits of the application, and a description of the state of the art.

  12. Measurement of meat color using a computer vision system.

    Science.gov (United States)

    Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada

    2013-01-01

    The limits of the colorimeter and of an image analysis technique in evaluating the color of beef, pork, and chicken were investigated. The Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the image of the sample displayed on the monitor, a similarity test was carried out using a trained panel. The panelists compared the actual meat sample and the sample image on the monitor in order to evaluate the similarity between them (test A). Moreover, the panelists were asked to evaluate the similarity between two colors, both generated by the software Adobe Photoshop CS3, one using the L, a and b values read by the colorimeter and the other obtained using the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). The panelists found the digital images very similar to the actual samples. Comparing the two generated colors, the panelists found significant differences between them, and the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle and chroma obtained with the CVS and the colorimeter were statistically significant. The colorimeter therefore did not appear to generate colors similar to the actual color of meat; instead, the CVS method seemed to give valid measurements that reproduced a color very similar to the real one. Copyright © 2012 Elsevier Ltd. All rights reserved.
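
    The measurement step underlying such comparisons, converting a calibrated RGB image region of the sample into mean CIELAB values (and derived chroma and hue angle), can be sketched as below. The synthetic patch stands in for a segmented meat surface; a real CVS additionally requires controlled illumination and colour calibration before this conversion is meaningful.

        # Hedged sketch: mean CIELAB values of an RGB sample region.
        import numpy as np
        from skimage import color

        # Synthetic reddish patch in sRGB, values in [0, 1]
        rng = np.random.default_rng(3)
        patch = np.clip(rng.normal(loc=[0.55, 0.25, 0.25], scale=0.02, size=(100, 100, 3)), 0, 1)

        lab = color.rgb2lab(patch)                  # sRGB -> CIELAB (D65)
        L, a, b = lab[..., 0].mean(), lab[..., 1].mean(), lab[..., 2].mean()
        chroma = np.hypot(a, b)
        hue_angle = np.degrees(np.arctan2(b, a))
        print(f"L*={L:.1f}  a*={a:.1f}  b*={b:.1f}  chroma={chroma:.1f}  hue={hue_angle:.1f} deg")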

  13. Using Vision Metrology System for Quality Control in Automotive Industries

    Science.gov (United States)

    Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.

    2012-07-01

    The need for more accurate measurements at different stages of industrial applications, such as design, production and installation, is the main reason industry is encouraged to use industrial Photogrammetry (Vision Metrology Systems). Owing to the main advantages of photogrammetric methods, such as greater economy, a high level of automation, the capability of non-contact measurement, more flexibility and high accuracy, this method competes well with other traditional industrial methods. For industries that make objects from a master reference model without having any mathematical model of it, the main problem for producers is evaluation of the production line. This problem becomes more complicated when both the reference and the product exist only as physical objects, so that they can be compared only by direct measurement. In such cases, producers make fixtures fitting the reference with limited accuracy; in practical reports the available precision is sometimes no better than millimetres. We used a non-metric high-resolution digital camera for this investigation, and the case study considered in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and the product were then obtained using the Bundle Adjustment and Self-Calibration methods. These differences are useful for the producer to improve the production workflow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields. The presented results prove the high efficiency and reliability of this method using the RMSE criterion; the RMSE achieved for this case study is smaller than 200 microns, which shows the high capability of the implemented approach.

  14. USING VISION METROLOGY SYSTEM FOR QUALITY CONTROL IN AUTOMOTIVE INDUSTRIES

    Directory of Open Access Journals (Sweden)

    N. Mostofi

    2012-07-01

Full Text Available The need for more accurate measurements at different stages of industrial applications, such as design, production and installation, is the main reason industry has been encouraged to adopt industrial photogrammetry (vision metrology systems). Owing to the main advantages of photogrammetric methods, such as greater economy, a high level of automation, non-contact measurement capability, greater flexibility and high accuracy, this method competes well with traditional industrial measurement methods. In industries that manufacture objects from a physical reference model without any mathematical model of it, the main problem for producers is the evaluation of the production line. The problem becomes more complicated when both the reference and the product are available only as physical objects and can only be compared by direct measurement. In such cases, producers build fixtures that fit the reference with limited accuracy; in practical reports the available precision is sometimes no better than millimetres. We used a non-metric, high-resolution digital camera for this investigation, and the case study examined in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and the product were then obtained using bundle adjustment and self-calibration methods. These differences help the producer improve the production work flow and deliver more accurate products. The results demonstrate the high potential of the proposed method in industrial applications and prove its efficiency and reliability in terms of the RMSE criterion. The RMSE achieved for this case study is smaller than 200 microns, which shows the high capability of the implemented approach.
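
    The final comparison step described above reduces to differencing matched reference and product coordinates recovered from the adjusted photogrammetric network and summarizing the result as an RMSE. A minimal sketch follows; the coordinates are invented placeholders, not measurements from the chassis case study.

```python
import numpy as np

# Matched 3-D point coordinates (mm) on the reference and on the produced part, as they
# would come out of bundle adjustment with self-calibration (placeholder values).
reference = np.array([[0.00, 0.00, 0.00], [120.50, 0.20, 10.10], [240.90, -0.10, 20.30]])
product = np.array([[0.05, 0.02, -0.03], [120.62, 0.15, 10.04], [240.75, -0.02, 20.41]])

residuals = product - reference
rmse = float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))   # 3-D RMSE over matched points
print("RMSE (mm):", round(rmse, 3))   # the paper reports < 0.2 mm for its case study
```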

  15. PENGEMBANGAN COMPUTER VISION SYSTEM SEDERHANA UNTUK MENENTUKAN KUALITAS TOMAT Development of a simple Computer Vision System to determine tomato quality

    Directory of Open Access Journals (Sweden)

    Rudiati Evi Masithoh

    2012-05-01

Full Text Available The purpose of this research was to develop a simple computer vision system (CVS) to measure tomato quality non-destructively from its Red Green Blue (RGB) color parameters. The tomato quality parameters measured were Brix, citric acid, vitamin C, and total sugar. The system consisted of a box in which the object is placed, a webcam to capture images, a computer to process images, an illumination system, and image analysis software equipped with an artificial neural network for determining tomato quality. The network architecture had 3 layers: an input layer with 3 neurons, a hidden layer with 14 neurons using the logsig activation function, and an output layer with 5 neurons using the purelin activation function, trained with the backpropagation algorithm. The developed CVS was able to predict the quality parameters Brix, vitamin C, citric acid, and total sugar. To obtain predicted values equal or close to the actual values, a calibration model was required. For Brix, the actual value was obtained from the equation y = 12.16x − 26.46, where x is the predicted Brix. The actual values of vitamin C, citric acid, and total sugar were obtained from y = 1.09x − 3.13, y = 7.35x − 19.44, and y = 1.58x − 0.18, where x is the predicted value of vitamin C, citric acid, and total sugar, respectively.
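
    The network described above can be sketched with scikit-learn: one hidden layer of 14 logistic ("logsig") neurons, linear outputs, backpropagation-based training, and the published linear calibration for Brix. The training data below are synthetic placeholders, and only four outputs (Brix, citric acid, vitamin C, total sugar) are modelled, which is an assumption of this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: mean R, G, B of a tomato image -> four scaled quality parameters.
X = rng.uniform(0.0, 255.0, size=(200, 3))
Y = rng.uniform(0.0, 1.0, size=(200, 4))      # Brix, citric acid, vitamin C, total sugar (scaled)

# 3 inputs -> 14 logistic hidden neurons -> linear outputs, trained by backpropagation.
model = MLPRegressor(hidden_layer_sizes=(14,), activation="logistic",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(X, Y)

predicted = model.predict(X[:1])[0]
brix_actual = 12.16 * predicted[0] - 26.46     # calibration equation reported in the abstract
print("predicted (scaled):", np.round(predicted, 3), "calibrated Brix:", round(brix_actual, 2))
```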

  16. Design of control system for optical fiber drawing machine driven by double motor

    Science.gov (United States)

    Yu, Yue Chen; Bo, Yu Ming; Wang, Jun

    2018-01-01

The microchannel plate (MCP) is a large-area array electron multiplier with high two-dimensional spatial resolution, used in high-performance night vision intensifiers. High-precision control of the fiber is the key technology of the microchannel plate manufacturing process, and in this paper it is achieved through the control of an optical fiber drawing machine driven by two motors. First, using an STM32 chip, the servo motor drive and control circuit was designed to realize dual-motor synchronization. Second, a neural network PID control algorithm was designed to control the fiber diameter with high precision. Finally, hexagonal fiber was manufactured on this system, and the results show that the multifilament diameter accuracy of the fiber is +/- 1.5 μm.
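
    The diameter-regulation loop can be illustrated with a plain discrete PID controller acting on a toy plant model. In the paper the gains are tuned on line by a neural network; in the sketch below they are fixed constants, and both the gains and the plant dynamics are assumptions made purely for illustration.

```python
def simulate_fiber_draw(setpoint_um=30.0, steps=2000, dt=0.01):
    """Toy closed loop: a PID correction drives a first-order model of drawn-fiber diameter."""
    kp, ki, kd = 2.0, 1.0, 0.5           # hypothetical fixed gains (tuned on line by an NN in the paper)
    diameter = 45.0                       # initial diameter in micrometres (placeholder)
    integral = 0.0
    prev_err = setpoint_um - diameter
    for _ in range(steps):
        err = setpoint_um - diameter
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative                 # drawing-speed correction
        diameter += (0.5 * u - 0.1 * (diameter - setpoint_um)) * dt    # toy plant response
        prev_err = err
    return diameter

print("final diameter (um):", round(simulate_fiber_draw(), 3))
```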

  17. An expert system for vibration based diagnostics of rotating machines

    International Nuclear Information System (INIS)

    Korteniemi, A.

    1990-01-01

Changes in the mechanical condition of rotating machinery can very often be observed as changes in its vibration. This paper presents an expert system for vibration-based diagnosis of rotating machines by describing the architecture of the developed prototype system. The importance of modelling the problem-solving knowledge as well as the domain knowledge is emphasized by presenting the knowledge at several levels.
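
    A rule-based diagnostic step of the kind described above can be sketched as a simple symptom-to-rule match: features extracted from a vibration spectrum are checked against if-then rules. The rules and thresholds below are illustrative assumptions, not the knowledge base of the cited system.

```python
def diagnose(spectrum_peaks):
    """spectrum_peaks maps a harmonic of the running speed ('1x', '2x', ...) to amplitude (mm/s)."""
    findings = []
    if spectrum_peaks.get("1x", 0.0) > 4.0:
        findings.append("possible unbalance (high 1x component)")
    if spectrum_peaks.get("2x", 0.0) > 2.0:
        findings.append("possible misalignment (high 2x component)")
    if spectrum_peaks.get("0.45x", 0.0) > 1.0:
        findings.append("possible oil whirl (sub-synchronous component)")
    return findings or ["no rule fired: machine condition appears normal"]

print(diagnose({"1x": 5.2, "2x": 1.1}))
```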

  18. Trust between man and machine in a teleoperation system

    International Nuclear Information System (INIS)

    Dassonville, I.; Jolly, D.; Desodt, A.M.

    1996-01-01

The work we present deals with the trust of man in a teleoperation system. Trust is important because it is linked to stress, which modifies human reliability. We are trying to quantify trust. In this paper, we present the theory of trust in relationships and its extension to a man-machine system. We then explain the links between trust and human reliability, and finally introduce our experimental process and the first results concerning self-confidence.

  19. A machine protection beam position monitor system

    International Nuclear Information System (INIS)

    Medvedko, E.; Smith, S.; Fisher, A.

    1998-01-01

    Loss of the stored beam in an uncontrolled manner can cause damage to the PEP-II B Factory. We describe here a device which detects large beam position excursions or unexpected beam loss and triggers the beam abort system to extract the stored beam safely. The bad-orbit abort trigger beam position monitor (BOAT BPM) generates a trigger when the beam orbit is far off the center (>20 mm), or rapid beam current loss (dI/dT) is detected. The BOAT BPM averages the input signal over one turn (136 kHz). AM demodulation is used to convert input signals at 476 MHz to baseband voltages. The detected signal goes to a filter section for suppression of the revolution frequency, then on to amplifiers, dividers, and comparators for position and current measurements and triggering. The derived current signal goes to a special filter, designed to perform dI/dT monitoring at fast, medium, and slow current loss rates. The BOAT BPM prototype test results confirm the design concepts. copyright 1998 American Institute of Physics
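
    The two trigger conditions described above, a large orbit excursion or a rapid current loss, can be sketched as a turn-by-turn check. The 20 mm excursion limit is taken from the abstract; the loss-rate threshold and the numeric examples are assumptions made for illustration.

```python
def abort_trigger(position_mm, current_ma, prev_current_ma, dt_s,
                  max_offset_mm=20.0, max_loss_ma_per_s=5.0e5):
    """True if the beam should be aborted: large orbit excursion or rapid current loss."""
    off_center = abs(position_mm) > max_offset_mm           # 20 mm limit from the abstract
    loss_rate = (prev_current_ma - current_ma) / dt_s        # positive when current is being lost
    fast_loss = loss_rate > max_loss_ma_per_s                # loss-rate threshold is an assumption
    return off_center or fast_loss

turn_period = 1.0 / 136_000.0                                # one revolution at ~136 kHz
print(abort_trigger(position_mm=3.0, current_ma=999.9, prev_current_ma=1000.0, dt_s=turn_period))
print(abort_trigger(position_mm=25.0, current_ma=1000.0, prev_current_ma=1000.0, dt_s=turn_period))
```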

  20. Vision-based fall detection system for improving safety of elderly people

    KAUST Repository

    Harrou, Fouzi; Zerrouki, Nabil; Sun, Ying; Houacine, Amrane

    2017-01-01

    Recognition of human movements is very useful for several applications, such as smart rooms, interactive virtual reality systems, human detection and environment modeling. The objective of this work focuses on the detection and classification of falls based on variations in human silhouette shape, a key challenge in computer vision. Falls are a major health concern, specifically for the elderly. In this study, the detection is achieved with a multivariate exponentially weighted moving average (MEWMA) monitoring scheme, which is effective in detecting falls because it is sensitive to small changes. Unfortunately, an MEWMA statistic fails to differentiate real falls from some fall-like gestures. To remedy this limitation, a classification stage based on a support vector machine (SVM) is applied on detected sequences. To validate this methodology, two fall detection datasets have been tested: the University of Rzeszow fall detection dataset (URFD) and the fall detection dataset (FDD). The results of the MEWMA-based SVM are compared with three other classifiers: neural network (NN), naïve Bayes and K-nearest neighbor (KNN). These results show the capability of the developed strategy to distinguish fall events, suggesting that it can raise an early alert in the fall incidents.

  1. Vision-based fall detection system for improving safety of elderly people

    KAUST Repository

    Harrou, Fouzi

    2017-12-06

    Recognition of human movements is very useful for several applications, such as smart rooms, interactive virtual reality systems, human detection and environment modeling. The objective of this work focuses on the detection and classification of falls based on variations in human silhouette shape, a key challenge in computer vision. Falls are a major health concern, specifically for the elderly. In this study, the detection is achieved with a multivariate exponentially weighted moving average (MEWMA) monitoring scheme, which is effective in detecting falls because it is sensitive to small changes. Unfortunately, an MEWMA statistic fails to differentiate real falls from some fall-like gestures. To remedy this limitation, a classification stage based on a support vector machine (SVM) is applied on detected sequences. To validate this methodology, two fall detection datasets have been tested: the University of Rzeszow fall detection dataset (URFD) and the fall detection dataset (FDD). The results of the MEWMA-based SVM are compared with three other classifiers: neural network (NN), naïve Bayes and K-nearest neighbor (KNN). These results show the capability of the developed strategy to distinguish fall events, suggesting that it can raise an early alert in the fall incidents.
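
    The two-stage scheme above, a MEWMA statistic for detection followed by an SVM for confirmation, can be sketched as follows. The silhouette features, labels and detection threshold are synthetic placeholders, and scikit-learn is assumed for the SVM stage.

```python
import numpy as np
from sklearn.svm import SVC

def mewma_statistic(X, lam=0.2):
    """Multivariate EWMA T^2 statistic per row of X (rows = frames, cols = silhouette features)."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sigma_z = (lam / (2.0 - lam)) * np.cov(X, rowvar=False)   # asymptotic covariance of the EWMA vector
    sigma_z_inv = np.linalg.pinv(sigma_z)
    z = np.zeros(X.shape[1])
    t2 = np.empty(len(X))
    for i, x in enumerate(X):
        z = lam * (x - mu) + (1.0 - lam) * z
        t2[i] = z @ sigma_z_inv @ z
    return t2

# Hypothetical silhouette features (e.g. height/width ratio, orientation) for a video sequence.
rng = np.random.default_rng(1)
features = rng.normal(size=(300, 2))
features[200:220] += 4.0                                       # a sudden change mimicking a fall

t2 = mewma_statistic(features)
detected = np.where(t2 > np.percentile(t2, 99))[0]             # flagged frames (illustrative threshold)

# Second stage: an SVM separates real falls from fall-like gestures using labelled training windows.
X_train = rng.normal(size=(40, 2))
y_train = np.array([0, 1] * 20)                                # placeholder labels: 0 = gesture, 1 = fall
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("flagged frames:", detected[:5], "->", clf.predict(features[detected[:5]]))
```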

  2. Machine Learning-based Intelligent Formal Reasoning and Proving System

    Science.gov (United States)

    Chen, Shengqing; Huang, Xiaojian; Fang, Jiaze; Liang, Jia

    2018-03-01

    The reasoning system can be used in many fields. How to improve reasoning efficiency is the core of the design of system. Through the formal description of formal proof and the regular matching algorithm, after introducing the machine learning algorithm, the system of intelligent formal reasoning and verification has high efficiency. The experimental results show that the system can verify the correctness of propositional logic reasoning and reuse the propositional logical reasoning results, so as to obtain the implicit knowledge in the knowledge base and provide the basic reasoning model for the construction of intelligent system.

  3. An Android malware detection system based on machine learning

    Science.gov (United States)

    Wen, Long; Yu, Haiyang

    2017-08-01

The Android smartphone, with its open source character and excellent performance, has attracted many users. However, the convenience of the Android platform has also motivated the development of malware. Traditional methods that detect malware based on signatures are unable to detect unknown applications. This article proposes a machine learning-based lightweight system that is capable of identifying malware on Android devices. In this system we extract features based on static analysis and dynamic analysis; a new feature selection approach based on principal component analysis (PCA) and Relief is then presented to reduce the dimensionality of the features. After that, a model is constructed with a support vector machine (SVM) for classification. Experimental results show that our system provides an effective method for Android malware detection.
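
    The reduce-then-classify pipeline above can be sketched with scikit-learn. PCA stands in for the paper's combined PCA and Relief selection step, and the feature matrix and labels below are synthetic placeholders rather than real APK features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: rows are APKs, columns are static/dynamic features
# (permission flags, API-call counts, ...); labels mark malware (1) vs benign (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = np.r_[np.zeros(150, dtype=int), np.ones(150, dtype=int)]
X[y == 1, :5] += 1.5                      # give "malware" a crude signature so the toy example learns

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```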

  4. Human machine interface for research reactor instrumentation and control system

    International Nuclear Information System (INIS)

    Mohd Sabri Minhat; Mohd Idris Taib; Izhar Abu Hussin; Zareen Khan Abdul Jalil Khan; Nurfarhana Ayuni Joha

    2010-01-01

Most present designs of the Human Machine Interface for a Research Reactor Instrumentation and Control System are modular-based, comprising several cabinets such as the Reactor Protection System, Control Console, Information Console and Communication Console. Safety, engineering and human factors are all considered in the design. Redundancy and separation of signal and power supply are the main factors for safety consideration. The design of the Operator Interface takes full account of human and environmental factors: physical parameters, experience, trainability and long-established habit patterns are very important for the user interface, in addition to aesthetics and operator-interface geometry. A physical design for the new Instrumentation and Control System of RTP is proposed based on state-of-the-art Human Machine Interface design. (author)

  5. Development of Vision System for Dimensional Measurement for Irradiated Fuel Assembly

    International Nuclear Information System (INIS)

    Shin, Jungcheol; Kwon, Yongbock; Park, Jongyoul; Woo, Sangkyun; Kim, Yonghwan; Jang, Youngki; Choi, Joonhyung; Lee, Kyuseog

    2006-01-01

In order to develop an advanced nuclear fuel, a series of pool side examinations (PSE) is performed to confirm the in-pile behavior of the fuel for commercial production. For this purpose, a vision system was developed to measure the mechanical integrity, such as assembly bowing, twist and growth, of the loaded lead test assembly. Using this vision system, three (3) PSE campaigns were carried out at Uljin Unit 3 and Kori Unit 2 for the advanced fuels PLUS7 TM and 16ACE7 TM developed by KNFC. Among the main characteristics of the vision system are its very simple structure and measuring principle. This feature greatly reduces equipment installation and inspection time, and allows the PSE to be finished without disturbing the fuel loading and unloading activities during utility overhaul periods. Another feature is the high accuracy and repeatability achieved by this vision system.

  6. Fuzzy Decision-Making Fuser (FDMF for Integrating Human-Machine Autonomous (HMA Systems with Adaptive Evidence Sources

    Directory of Open Access Journals (Sweden)

    Yu-Ting Liu

    2017-06-01

Full Text Available A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This

  7. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources.

    Science.gov (United States)

    Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion
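
    The idea of weighting the human (BCI) and machine (computer vision) decisions by their associated uncertainty can be sketched as a simple confidence-weighted fusion. The published FDMF is built on fuzzy decision-making machinery; the function below is a deliberately simplified stand-in, not the authors' method.

```python
def fuse(human_prob, human_conf, machine_prob, machine_conf):
    """Combine two target probabilities, each weighted by the reporting agent's confidence."""
    total = human_conf + machine_conf
    if total == 0.0:
        return 0.5                                   # no usable evidence from either source
    return (human_conf * human_prob + machine_conf * machine_prob) / total

# Example: a tired operator (low confidence) disagrees with a confident vision classifier.
print(fuse(human_prob=0.8, human_conf=0.4, machine_prob=0.3, machine_conf=0.9))
```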

  8. A review of warship man-machine-environment system engineering

    Directory of Open Access Journals (Sweden)

    ZHANG Yumei

    2017-03-01

Full Text Available Warship Man-Machine-Environment System Engineering (MMESE) is an integral part of the overall design, and its design principles are proposed with respect to safety, efficiency, comfort and pleasure. The typical characteristics of MMESE are summarized: the operating environment is extremely harsh on long voyages; a high level of collaboration is required because of the complex task system and large manpower demand; and, owing to the dense computer-interface information, the cognitive burden on the crew is heavy. The MMESE technology system is divided into four parts: man-machine coordination, man-environment coordination, the evaluation of man-machine-environment characteristics, and ergonomic simulation. Based on the development of MMESE outlined in this paper, the overseas and domestic research status is reviewed. Interactive optimization can be realized by researching the basic human characteristics of the crew, applying them to the warship's overall design, and formulating relevant ergonomic standards and norms. Human System Integration (HSI) engineering is then introduced comprehensively into the marines in order to achieve an optimal system. On this basis, a future development trend analysis is completed. These studies and results provide a reference for guiding the integrated optimization of warships as a whole, downsizing manpower and improving efficiency.

  9. ANN Based Tool Condition Monitoring System for CNC Milling Machines

    Directory of Open Access Journals (Sweden)

    Mota-Valtierra G.C.

    2011-10-01

Full Text Available Most companies have the objective of manufacturing high-quality products, which is made possible by optimizing costs and by reducing and controlling the variations in their production processes. Within manufacturing industries a very important issue is tool condition monitoring, since the tool state determines the quality of products; moreover, a good monitoring system protects the machinery from severe damage. For determining the state of the cutting tools in a milling machine, a great variety of systems exists on the industrial market; however, these systems are not available to all companies because of their high costs and the requirement to modify the machine tool in order to attach the system sensors. This paper presents an intelligent classification system which determines the status of cutters in a Computer Numerical Control (CNC) milling machine. The tool state is detected mainly through the analysis of the cutting forces drawn from the spindle motor currents. This monitoring system does not need additional sensors, so it is not necessary to modify the machine. The classification is made by advanced digital signal processing techniques. Just after acquiring a signal, a FIR digital filter is applied to the data to eliminate the undesired noisy components and to extract the embedded force components. A wavelet transformation is applied to the filtered signal in order to compress the data and to optimize the classifier structure. A multilayer perceptron-type neural network is then responsible for carrying out the classification of the signal. Achieving a reliability of 95%, the system is capable of detecting breakage and a worn cutter.
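
    The FIR-filter, wavelet-compression and neural-network stages of that pipeline can be sketched as below. The snippet assumes SciPy, the PyWavelets package (pywt) and scikit-learn, and it runs on synthetic spindle-current signals with toy labels; cutoff frequency, wavelet family and network size are all assumptions.

```python
import numpy as np
from scipy.signal import firwin, lfilter
import pywt                                   # PyWavelets, assumed available
from sklearn.neural_network import MLPClassifier

def features_from_current(current, fs=10_000):
    """FIR low-pass filtering followed by wavelet compression into sub-band energies."""
    taps = firwin(numtaps=64, cutoff=500, fs=fs)          # illustrative cutoff frequency
    filtered = lfilter(taps, 1.0, current)
    coeffs = pywt.wavedec(filtered, "db4", level=4)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])   # energy of each sub-band

rng = np.random.default_rng(0)
signals = [rng.normal(scale=1.0 + label, size=4096) for label in (0, 1) for _ in range(20)]
labels = [0] * 20 + [1] * 20                               # 0 = sharp cutter, 1 = worn/broken (toy labels)

X = np.array([features_from_current(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```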

  10. Embedded Platforms for Computer Vision-based Advanced Driver Assistance Systems: a Survey

    OpenAIRE

    Velez, Gorka; Otaegui, Oihana

    2015-01-01

Computer vision, either alone or combined with other technologies such as radar or Lidar, is one of the key technologies used in Advanced Driver Assistance Systems (ADAS). Its role in understanding and analysing the driving scene is of great importance, as can be seen from the number of ADAS applications that use this technology. However, porting a vision algorithm to an embedded automotive system is still very challenging, as a trade-off between several design requisites must be found. Further...

  11. Framework for man-machine interface design evaluation system considering cognitive factor

    International Nuclear Information System (INIS)

    Itoh, Toru; Sasaki, Kazunori; Yoshikawa, Hidekazu; Takahashi, Makoto; Furuta, Tomihiko.

    1994-01-01

It is necessary to improve human reliability in order to achieve higher reliability of the total plant system, taking into account the development of plant automation and the improvement of machine reliability. The role of the man-machine system will therefore become increasingly important. Accordingly, an evaluation of man-machine system design information is desired in order to comprehensively solve the mismatch between the plant information presented by the man-machine system and the information required by the operator. This paper discusses the required functions and a software framework for the man-machine interface design evaluation system. The system extracts potential problems inherent in the design information of the man-machine system by simulating the operator behavior, the plant system and the man-machine system, considering the operator's cognitive performance and time dependency. (author)

  12. FAIR principles and the IEDB: short-term improvements and a long-term vision of OBO-foundry mediated machine-actionable interoperability

    Science.gov (United States)

    Vita, Randi; Overton, James A; Mungall, Christopher J; Sette, Alessandro

    2018-01-01

    Abstract The Immune Epitope Database (IEDB), at www.iedb.org, has the mission to make published experimental data relating to the recognition of immune epitopes easily available to the scientific public. By presenting curated data in a searchable database, we have liberated it from the tables and figures of journal articles, making it more accessible and usable by immunologists. Recently, the principles of Findability, Accessibility, Interoperability and Reusability have been formulated as goals that data repositories should meet to enhance the usefulness of their data holdings. We here examine how the IEDB complies with these principles and identify broad areas of success, but also areas for improvement. We describe short-term improvements to the IEDB that are being implemented now, as well as a long-term vision of true ‘machine-actionable interoperability’, which we believe will require community agreement on standardization of knowledge representation that can be built on top of the shared use of ontologies. PMID:29688354

  13. Humans and machines in space: The vision, the challenge, the payoff; Proceedings of the 29th Goddard Memorial Symposium, Washington, Mar. 14, 15, 1991

    Science.gov (United States)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    The present conference discusses the currently envisioned goals of human-machine systems in spacecraft environments, prospects for human exploration of the solar system, and plausible methods for meeting human needs in space. Also discussed are the problems of human-machine interaction in long-duration space flights, remote medical systems for space exploration, the use of virtual reality for planetary exploration, the alliance between U.S. Antarctic and space programs, and the economic and educational impacts of the U.S. space program.

  14. System and method for controlling a vision guided robot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method if a first part from the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing the execution of the visual processing method for determining the position deviation of the second part from the second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing a first action on the first part using the robotic arm with the position deviation of the first part from the first position predetermined by the vision process method.
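
    The ordering of those steps can be rendered as a small, runnable control-flow sketch. The Robot and Vision classes below are hypothetical stand-ins with placeholder return values; only the sequence of calls mirrors the method described, not the patented implementation.

```python
class Vision:
    """Stand-in for the vision process method: returns (ready, position deviation in mm)."""
    def check_part(self, index):
        return True, 0.4                     # placeholder result

class Robot:
    def act(self, position):
        print(f"acting at {position:.1f} mm")
    def move_toward(self, position):
        print(f"moving toward {position:.1f} mm")

def run_sequence(robot, vision, positions):
    robot.act(positions[0])                                   # action at the start position
    for i, target in enumerate(positions[1:], start=1):
        robot.move_toward(target)
        ready, deviation = vision.check_part(i)               # readiness + deviation from the vision process
        if ready:
            robot.act(target + deviation)                     # action on the part, corrected by vision

run_sequence(Robot(), Vision(), [0.0, 150.0, 300.0])
```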

  15. The Abstract Machine Model for Transaction-based System Control

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.

    2003-01-31

Recent work applying statistical mechanics to economic modeling has demonstrated the effectiveness of using thermodynamic theory to address the complexities of large-scale economic systems. Transaction-based control systems depend on the conjecture that when control of thermodynamic systems is based on price-mediated strategies (e.g., auctions, markets), the optimal allocation of resources in a market-based control system results in an emergent optimal control of the thermodynamic system. This paper proposes an abstract machine model as the necessary precursor for demonstrating this conjecture and establishes the dynamic laws as the basis for a special theory of emergence applied to the global behavior and control of complex adaptive systems. The abstract machine in a large system amounts to the analog of a particle in thermodynamic theory. The dynamic laws permit the establishment of a theory of dynamic control of complex system behavior based on statistical mechanics. Thus we may be better able to engineer a few simple control laws for a very small number of device types, which, when deployed in very large numbers and operated as a system of many interacting markets, yields stable and optimal control of the thermodynamic system.

  16. System Center 2012 R2 Virtual Machine Manager cookbook

    CERN Document Server

    Cardoso, Edvaldo Alessandro

    2014-01-01

    This book is a step-by-step guide packed with recipes that cover architecture design and planning. The book is also full of deployment tips, techniques, and solutions. If you are a solutions architect, technical consultant, administrator, or any other virtualization enthusiast who needs to use Microsoft System Center Virtual Machine Manager in a real-world environment, then this is the book for you. We assume that you have previous experience with Windows 2012 R2 and Hyper-V.

  17. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    International Nuclear Information System (INIS)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin

    2014-01-01

    Image information in disaster area or radiation area of nuclear industry is an important data for safety inspection and preparing appropriate damage control plans. So, robust vision system for structures and facilities in blurred smoking environments, such as the places of a fire and detonation, is essential in remote monitoring. Vision systems can't acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog, dust. The vision system based on wavefront correction can be applied to blurred imaging environments and the range-gated imaging system can be applied to both of blurred imaging and darken light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are for improving the image quality in optical imaging systems such as infrared astronomical telescopes, in imaging and tracking rapidly moving space objects, and in compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using a wavefront sensor and corrects it using a deformable mirror in a closed-loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique providing 2D and 3D images is one of emerging active vision technologies. The range-gated imaging system gets vision information by summing time sliced vision images. In the RGI system, a high intensity illuminant illuminates for ultra-short time and a highly sensitive image sensor is gated by ultra

  18. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Image information in disaster area or radiation area of nuclear industry is an important data for safety inspection and preparing appropriate damage control plans. So, robust vision system for structures and facilities in blurred smoking environments, such as the places of a fire and detonation, is essential in remote monitoring. Vision systems can't acquire an image when the illumination light is blocked by disturbance materials, such as smoke, fog, dust. The vision system based on wavefront correction can be applied to blurred imaging environments and the range-gated imaging system can be applied to both of blurred imaging and darken light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are for improving the image quality in optical imaging systems such as infrared astronomical telescopes, in imaging and tracking rapidly moving space objects, and in compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using a wavefront sensor and corrects it using a deformable mirror in a closed-loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique providing 2D and 3D images is one of emerging active vision technologies. The range-gated imaging system gets vision information by summing time sliced vision images. In the RGI system, a high intensity illuminant illuminates for ultra-short time and a highly sensitive image sensor is gated by ultra
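
    The core of the range-gated scheme, "summing time sliced vision images", can be shown in a few lines. The gated frames below are synthetic placeholders standing in for slices captured with an ultra-short illuminant pulse and a gated sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gated frames: each 64x64 slice captures light returning from one range window.
gated_slices = [rng.poisson(lam=2.0, size=(64, 64)).astype(float) for _ in range(10)]

# The RGI image is the accumulation of the time-sliced frames.
rgi_image = np.sum(gated_slices, axis=0)
print(rgi_image.shape, float(rgi_image.mean()))
```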

  19. VIRTUAL MACHINES IN EDUCATION – CNC MILLING MACHINE WITH SINUMERIK 840D CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    Ireneusz Zagórski

    2014-11-01

Full Text Available The machining process nowadays could not be conducted without its inseparable elements: cutting tools and, frequently, numerically controlled milling machines. Milling and lathe machining centres are standard equipment in many companies of the machinery industry, e.g. automotive or aircraft. It is for that reason that tertiary education should account for this rising demand. This entails introducing into the curricula forms which enable visualisation of machining, of the milling process and of virtual production, as well as simulation of virtual machining centres. The Siemens Virtual Machine (Virtual Workshop) is an example of such software, whose high functionality offers a range of learning experiences, such as: learning the design of machine tools, their configuration, basic operating functions as well as the basics of CNC.

  20. Beam interlock system and safe machine parameters system 2010 and beyond

    CERN Document Server

    Todd, B

    2010-01-01

    The Beam Interlock System (BIS) and Safe Machine Parameters (SMP) system are central to the protection of the Large Hadron Collider (LHC) machine. The BIS has been critical for the safe operation of LHC from the first day of operation. It has been installed and commissioned, only minor enhancements are required in order to accommodate all future LHC machine protection requirements. At reduced intensity, the SMP system is less critical for LHC operation. As such, the current system satisfies the 2010 operational requirements. Further developments are required, both at the SMP Controller level, and at the system level, in order to accommodate the requirements of the LHC beyond 2010.

  1. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  2. Acquisition system for the diagnostics data from a toroidal machine

    International Nuclear Information System (INIS)

    Moulin, B.

    1976-01-01

The data acquisition system 'ARIANE' was conceived by the SIG (Service d'Ionique Generale) for physical measurements on the toroidal machines PETULA and WEGA, which were designed to study the H.F. heating of pulsed plasmas. These systems are constituted of electronic modules which permit them to be adapted to different kinds of measurements, either by analogue channels or by pulse counting. The programming of these systems is achieved either by multiswitches accessible manually on front panels, or by a computer which performs the numerical computations [fr

  3. Human-Machine Systems concepts applied to Control Engineering Education

    OpenAIRE

    Marangé , Pascale; Gellot , François; Riera , Bernard

    2008-01-01

International audience; In this paper, we focus on Human-Machine Systems (HMS) concepts applied to education. It is shown how the HMS framework makes it possible to propose original solutions for education in the field of control engineering. We focus on practical courses on the control of manufacturing systems. The proposed solution is based on an original use of real, large-scale systems instead of simulation. The main idea is to enable students, whatever their level, to control the ...

  4. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  5. 76 FR 63238 - Proximity Detection Systems for Continuous Mining Machines in Underground Coal Mines

    Science.gov (United States)

    2011-10-12

    ... Detection Systems for Continuous Mining Machines in Underground Coal Mines AGENCY: Mine Safety and Health... Agency's proposed rule addressing Proximity Detection Systems for Continuous Mining Machines in... proposed rule for Proximity Detection Systems on Continuous Mining Machines in Underground Coal Mines. Due...

  6. 76 FR 70075 - Proximity Detection Systems for Continuous Mining Machines in Underground Coal Mines

    Science.gov (United States)

    2011-11-10

    ... Detection Systems for Continuous Mining Machines in Underground Coal Mines AGENCY: Mine Safety and Health... proposed rule addressing Proximity Detection Systems for Continuous Mining Machines in Underground Coal... Detection Systems for Continuous Mining Machines in Underground Coal Mines. MSHA conducted hearings on...

  7. Dynamic cellular manufacturing system considering machine failure and workload balance

    Science.gov (United States)

    Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad

    2018-02-01

    Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes the labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the problem is initially validated by the GAMS software in small-sized problems, and then the model is solved by two well-known meta-heuristic methods including non-dominated sorting genetic algorithm and multi-objective particle swarm optimization in large-scaled problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.

  8. Clinical quality needs complex adaptive systems and machine learning.

    Science.gov (United States)

    Marsland, Stephen; Buchan, Iain

    2004-01-01

    The vast increase in clinical data has the potential to bring about large improvements in clinical quality and other aspects of healthcare delivery. However, such benefits do not come without cost. The analysis of such large datasets, particularly where the data may have to be merged from several sources and may be noisy and incomplete, is a challenging task. Furthermore, the introduction of clinical changes is a cyclical task, meaning that the processes under examination operate in an environment that is not static. We suggest that traditional methods of analysis are unsuitable for the task, and identify complexity theory and machine learning as areas that have the potential to facilitate the examination of clinical quality. By its nature the field of complex adaptive systems deals with environments that change because of the interactions that have occurred in the past. We draw parallels between health informatics and bioinformatics, which has already started to successfully use machine learning methods.

  9. Management system of ELHEP cluster machine for FEL photonics design

    Science.gov (United States)

    Zysik, Jacek; Poźniak, Krzysztof; Romaniuk, Ryszard

    2006-10-01

A multipurpose cluster machine, oriented towards distributed MatLab calculations, was assembled in the PERG/ELHEP laboratory at ISE/WUT. It is intended mainly for advanced photonics and FPGA/DSP based systems design for the Free Electron Laser, and will also be used for student projects on the superconducting accelerator and FEL. Here we present one specific side of the cluster design. For intense, distributed daily work with the cluster, it is important to have a good interface and practical access to all machine resources. A complex management system was implemented in the PERG laboratory. It helps all registered users to work with all necessary applications, communicate with other logged-in people, check the news and gather all necessary information about what is going on in the system, how it is utilized, etc. The system is also very practical for administration purposes: it helps to keep track of who is using the resources and for how long, it provides different privileges for different applications, and more. The system is released as freeware, using open source code, and can be modified by system operators or super-users who are interested in non-standard system configurations.

  10. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system require some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated, vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
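
    A minimal version of that filtering, segmentation and edge-detection chain can be written with OpenCV (cv2). The snippet runs on a synthetic frame so it is self-contained; the OpenCV 4 return signature of findContours and the area thresholds used for "recognition" are assumptions.

```python
import cv2
import numpy as np

# Synthetic frame: a bright "obstacle" on a dark background standing in for a grass field.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.circle(frame, (160, 120), 40, (255, 255, 255), -1)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                     # image filtering
_, mask = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)   # segmentation
edges = cv2.Canny(mask, 50, 150)                                # edge detection
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature

print("edge pixels:", int(np.count_nonzero(edges)))
for c in contours:
    area = cv2.contourArea(c)
    label = "large obstacle" if area > 3000 else "small obstacle"   # illustrative size rule
    print(label, area)
```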

  11. Machine learning strategies for systems with invariance properties

    Science.gov (United States)

    Ling, Julia; Jones, Reese; Templeton, Jeremy

    2016-08-01

    In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
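
    The two strategies compared above can be illustrated on a toy rotation-invariant target. In the sketch below, the invariant-basis model receives the squared vector norm as its input, while the second model sees raw inputs plus rotated copies; the data, the linear models and the eight augmentation angles are all assumptions of this illustration, chosen to show why embedding the invariance can pay off.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.linalg.norm(X, axis=1) ** 2            # target depends only on ||x||, hence rotation-invariant

# Method 1: embed the invariance by constructing an invariant input basis (here ||x||^2).
invariant_features = (X ** 2).sum(axis=1, keepdims=True)
m1 = LinearRegression().fit(invariant_features, y)

# Method 2: train on raw inputs plus rotated copies so the model must learn the invariance itself.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
rotations = [np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) for a in angles]
X_aug = np.vstack([X @ R for R in rotations])
y_aug = np.tile(y, len(rotations))
m2 = LinearRegression().fit(X_aug, y_aug)

# A linear model on raw inputs cannot represent this invariant target, so method 1 wins here.
print("invariant-basis R^2:", round(m1.score(invariant_features, y), 3))
print("augmented raw-input R^2:", round(m2.score(X, y), 3))
```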

  12. Power quality in power systems and electrical machines

    CERN Document Server

    Fuchs, Ewald

    2015-01-01

    The second edition of this must-have reference covers power quality issues in four parts, including new discussions related to renewable energy systems. The first part of the book provides background on causes, effects, standards, and measurements of power quality and harmonics. Once the basics are established the authors move on to harmonic modeling of power systems, including components and apparatus (electric machines). The final part of the book is devoted to power quality mitigation approaches and devices, and the fourth part extends the analysis to power quality solutions for renewable

  13. Formal verification of automated teller machine systems using SPIN

    Science.gov (United States)

    Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam

    2017-08-01

    Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of the Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formula. This model checker accepts models written in Process Meta Language (PROMELA), and its specifications are specified in LTL formulas.

  14. Machine Control System of Steady State Superconducting Tokamak-1

    Energy Technology Data Exchange (ETDEWEB)

    Masand, Harish, E-mail: harish@ipr.res.in; Kumar, Aveg; Bhandarkar, M.; Mahajan, K.; Gulati, H.; Dhongde, J.; Patel, K.; Chudasma, H.; Pradhan, S.

    2016-11-15

    Highlights: • Central Control System. • SST-1. • Machine Control System. - Abstract: Central Control System (CCS) of the Steady State Superconducting Tokamak-1 (SST-1) controls and monitors around 25 plant and experiment subsystems of SST-1 located remotely from the Central-Control room. Machine Control System (MCS) is a supervisory system that sits on the top of the CCS hierarchy and implements the CCS state diagram. MCS ensures the software interlock between the SST-1 subsystems with the CCS, any subsystem communication failure or its local error does not prohibit the execution of the MCS and in-turn the CCS operation. MCS also periodically monitors the subsystem’s status and their vital process parameters throughout the campaign. It also provides the platform for the Central Control operator to visualize and exchange remotely the operational and experimental configuration parameters with the sub-systems. MCS remains operational 24 × 7 from the commencement to the termination of the SST-1 campaign. The developed MCS has performed robustly and flawlessly during all the last campaigns of SST-1 carried out so far. This paper will describe various aspects of the development of MCS.

  15. Machine learning techniques for optical communication system optimization

    DEFF Research Database (Denmark)

    Zibar, Darko; Wass, Jesper; Thrane, Jakob

In this paper, machine learning techniques relevant to optical communication are presented and discussed. The focus is on applying machine learning tools to optical performance monitoring and performance prediction.

  16. Rule based systems for big data a machine learning approach

    CERN Document Server

    Liu, Han; Cocea, Mihaela

    2016-01-01

    The ideas introduced in this book explore the relationships among rule based systems, machine learning and big data. Rule based systems are seen as a special type of expert systems, which can be built by using expert knowledge or learning from real data. The book focuses on the development and evaluation of rule based systems in terms of accuracy, efficiency and interpretability. In particular, a unified framework for building rule based systems, which consists of the operations of rule generation, rule simplification and rule representation, is presented. Each of these operations is detailed using specific methods or techniques. In addition, this book also presents some ensemble learning frameworks for building ensemble rule based systems.

  17. Using Expert Systems in Evaluation of the State of High Voltage Machine Insulation Systems

    Directory of Open Access Journals (Sweden)

    K. Záliš

    2000-01-01

Full Text Available Expert systems are used for evaluating the actual state and future behavior of the insulation systems of high-voltage electrical machines and equipment. Several rule-based expert systems have been developed for this purpose in cooperation with top diagnostic workplaces in the Czech Republic. The IZOLEX expert system evaluates diagnostic measurement data from commonly used off-line diagnostic methods for the high-voltage insulation of rotating machines, non-rotating machines and insulating oils. The CVEX expert system evaluates the discharge activity on high-voltage electrical machines and equipment by means of off-line measurement. The CVEXON expert system evaluates discharge activity by on-line measurement, and the ALTONEX expert system is the expert system for on-line monitoring of rotating machines. These expert systems are also used for educating students (in bachelor, master and post-graduate studies) and in courses organized for practicing engineers, technicians and specialists in the electrical power engineering branch. A complex project has recently been set up to evaluate the measurement of partial discharges. Two parallel expert systems for evaluating partial discharge activity on high-voltage electrical machines will work at the same time in this complex evaluating system.

  18. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appear feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  19. Improvement of the image quality of a high-temperature vision system

    International Nuclear Information System (INIS)

    Fabijańska, Anna; Sankowski, Dominik

    2009-01-01

    In this paper, the issues of controlling and improving the image quality of a high-temperature vision system are considered. The image quality improvement is needed to measure the surface properties of metals and alloys. Two levels of image quality control and improvement are defined in the system. The first level, implemented in hardware, aims at adjusting the system configuration to obtain images with the highest contrast and the weakest aura. When the optimal configuration is obtained, the second level, implemented in software, is applied. In this stage, image enhancement algorithms are applied; these have been developed with consideration of the distortions arising from the vision system components and the specificity of images acquired during the measurement process. The developed algorithms have been applied to images acquired by the vision system. Their influence on the accuracy of wetting angle and surface tension determination is considered
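
    As a generic illustration of the software-level enhancement stage described above (not the authors' algorithms), the short Python/OpenCV sketch below applies contrast-limited adaptive histogram equalization followed by mild smoothing; the file name is hypothetical.

        import cv2

        # Load a greyscale frame from the high-temperature camera (hypothetical file).
        frame = cv2.imread("furnace_frame.png", cv2.IMREAD_GRAYSCALE)

        # Equalize contrast locally rather than globally, which limits the influence of the
        # bright aura around the glowing specimen on the rest of the image.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = cv2.GaussianBlur(clahe.apply(frame), (3, 3), 0)  # mild denoising after equalization

        cv2.imwrite("furnace_frame_enhanced.png", enhanced)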

  20. Modelling and Analysis of Vibrations in a UAV Helicopter with a Vision System

    Directory of Open Access Journals (Sweden)

    G. Nicolás Marichal Plasencia

    2012-11-01

    Full Text Available The analysis of the nature and damping of unwanted vibrations on Unmanned Aerial Vehicle (UAV) helicopters is an important task when images from on-board vision systems are to be obtained. In this article, the authors model a UAV system, generate a range of vibrations originating in the main rotor and design a control methodology in order to damp these vibrations. The UAV is modelled using VehicleSim; the vibrations that appear on the fuselage are analysed with SimMechanics software to study their effects on the on-board vision system. Following this, the authors present a control method based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) to achieve satisfactory damping results over the vision system on board.

  1. Reinforcement learning in computer vision

    Science.gov (United States)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that, when solving computer vision tasks, we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.
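
    As a toy illustration of the reinforcement learning idea in a vision-flavoured setting (entirely invented, not from the paper), the sketch below uses tabular Q-learning to learn which way to pan a virtual camera over ten discrete positions so that a target ends up centred; realistic computer vision applications replace the table with a deep network.

        import random

        N_POS, TARGET, ACTIONS = 10, 7, (-1, +1)
        Q = {(s, a): 0.0 for s in range(N_POS) for a in ACTIONS}
        alpha, gamma, epsilon = 0.5, 0.9, 0.2

        for episode in range(500):
            s = random.randrange(N_POS)
            for _ in range(30):
                # Epsilon-greedy action selection: pan left (-1) or right (+1).
                a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[(s, x)])
                s2 = min(max(s + a, 0), N_POS - 1)
                r = 1.0 if s2 == TARGET else -0.1        # reward only when the target is centred
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
                s = s2
                if s == TARGET:
                    break

        # Learned policy: for each camera position, the preferred panning direction.
        print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_POS)})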

  2. Tool management in manufacturing systems equipped with CNC machines

    Directory of Open Access Journals (Sweden)

    Giovanni Tani

    1997-12-01

    Full Text Available This work has been carried out for the purpose of realizing an automated system for the integrated management of tools within a company. By integrating planning, inspection and tool-room functions, automated tool management can ensure optimum utilization of tools on the selected machines, guaranteeing their effective availability. The first stage of the work consisted of defining and developing a Tool Management System whose central nucleus is a unified database for all of the tools, forming part of the company's Technological Files (files on machines, materials, equipment, methods, etc.), interfaceable with all of the company departments that require information on tools. The system assigns code numbers to the individual components of the tools and files them on the basis of their morphological and functional characteristics. The system is also designed to build assemblies of tools, from which are obtained the "Tool Cards" required for compiling working cycles (CAPP), for CAM programming and for the tool room where the tools are physically prepared. Methods for interfacing with suitable systems for the aforesaid functions have also been devised.
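
    A tiny, hypothetical sketch of the central idea (component records keyed by code numbers, assembled into tools whose "Tool Cards" can be printed) might look as follows; the codes and descriptions are invented.

        from dataclasses import dataclass

        @dataclass
        class Component:
            code: str
            kind: str          # e.g. holder, cutter, extension
            description: str

        @dataclass
        class ToolAssembly:
            code: str
            components: list

            def tool_card(self):
                # Render the assembly as a simple "Tool Card" listing its coded components.
                lines = [f"TOOL CARD {self.code}"]
                lines += [f"  {c.code}  {c.kind:<9} {c.description}" for c in self.components]
                return "\n".join(lines)

        # Unified component database keyed by code number (placeholder entries).
        db = {c.code: c for c in [
            Component("H-010", "holder", "ISO40 end-mill holder"),
            Component("C-205", "cutter", "12 mm carbide end mill"),
        ]}

        assembly = ToolAssembly("T-12", [db["H-010"], db["C-205"]])
        print(assembly.tool_card())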

  3. A survey on queues in machining system: Progress from 2010 to 2017

    Directory of Open Access Journals (Sweden)

    Shekhar C.

    2017-01-01

    Full Text Available The aim of the present article is to give a historical survey of some important research works related to queues in machining systems since 2010. Queues of failed machines in the machine repairing problem occur due to the random failure of machines in manufacturing industries, where different jobs are performed on machining stations. Machines are subject to failures, which may result in significant loss of production, revenue, or goodwill. In addition to the references on queues in machining systems, also called the `Machine Repair Problem' (MRP) or `Machine Interference Problem' (MIP), a meticulous list of books and survey papers is prepared so as to provide a detailed catalog for understanding research in the queueing domain. We have classified the relevant literature according to year of publication and to methodological and modeling aspects. The author(s) hope that this survey paper will be of help to learners contemplating research in the queueing domain.
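
    For readers new to the area, the finite-source queue at the heart of the machine repair problem can be evaluated in a few lines. The sketch below (with arbitrary illustrative rates) computes the steady-state probabilities of the classic model with M identical machines failing at rate lam and a single repairman working at rate mu, where P_n = P_0 * M!/(M-n)! * (lam/mu)^n.

        from math import factorial

        def machine_repair(M, lam, mu):
            # Unnormalized steady-state weights of having n failed machines, n = 0..M.
            weights = [factorial(M) // factorial(M - n) * (lam / mu) ** n for n in range(M + 1)]
            p0 = 1.0 / sum(weights)
            probs = [p0 * w for w in weights]
            expected_down = sum(n * p for n, p in enumerate(probs))   # mean number of failed machines
            utilisation = 1.0 - probs[0]                              # fraction of time the repairman is busy
            return probs, expected_down, utilisation

        probs, down, rho = machine_repair(M=5, lam=0.1, mu=1.0)
        print(f"mean failed machines = {down:.3f}, repairman utilisation = {rho:.3f}")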

  4. Comparison of drive systems for pulsed synchronous machines - an overview

    International Nuclear Information System (INIS)

    Baumgart, G.E.; Boenig, H.J.

    1986-01-01

    Magnetically confined plasma fusion experiments require large pulses of energy to be delivered into coil systems. One of the most effective methods of generating these high energy pulses is to convert stored inertial energy into electrical energy. Large synchronous generators of both the vertical and horizontal shaft type have been successfully used for this purpose. As the pulsed energy is delivered to the load, the inertial energy of the rotor of the machine is changed into electrical energy, causing the rotor to slow down. A drive system is required to accelerate the generator from standstill to the maximum operating speed and between load pulses from a reduced operating speed to the maximum speed. There are several types of drive systems that can be used for this application. An overview of six candidate drive systems is presented and comparisons of cost, performance, efficiency and line effects for these systems are described
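
    The energy balance described above can be made concrete with a back-of-the-envelope calculation; the numbers below are purely illustrative, not those of any machine discussed in the record.

        from math import pi

        J = 1.0e5                     # rotor moment of inertia, kg*m^2 (assumed)
        w1 = 2 * pi * 25              # speed before the pulse, rad/s (1500 rpm)
        w2 = 2 * pi * 20              # speed after the pulse, rad/s (1200 rpm)

        # Electrical energy delivered per pulse equals the inertial energy released
        # as the rotor slows: E = 1/2 * J * (w1^2 - w2^2).
        energy_joules = 0.5 * J * (w1 ** 2 - w2 ** 2)
        print(f"energy delivered per pulse ~ {energy_joules / 1e6:.0f} MJ")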

  5. Comparison of drive systems for pulsed synchronous machines: an overview

    International Nuclear Information System (INIS)

    Baumgart, G.E.; Boenig, H.J.

    1985-01-01

    Magnetically confined plasma fusion experiments require large pulses of energy to be delivered into coil systems. One of the most effective methods of generating these high energy pulses is to convert stored inertial energy into electrical energy. Large synchronous generators of both the vertical and horizontal shaft type have been successfully used for this purpose. As the pulsed energy is delivered to the load, the inertial energy of the rotor of the machine is changed into electrical energy, causing the rotor to slow down. A drive system is required to accelerate the generator from standstill to the maximum operating speed and between load pulses from a reduced operating speed to the maximum speed. There are several types of drive systems that can be used for this application. An overview of six candidate drive systems is presented and comparisons of cost, performance, efficiency, and line effects for these systems are described

  6. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

    Well-designed visual interfaces encourage non-expert users to use relational database systems. In systems such as office automation systems or engineering database systems, non-expert users interactively access the database from visual terminals. Some users may want exclusive use of the database, while others may share it, depending on the situation. Because those jobs need a lot of time to be completed, concurrency control must be well designed to enhance concurrency. The extended method of concurrency control of FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network and the database machine FREND. This paper also stresses that the workstations and FREND must cooperate to complete concurrency control for interactive applications

  7. Color Calibration for Colorized Vision System with Digital Sensor and LED Array Illuminator

    Directory of Open Access Journals (Sweden)

    Zhenmin Zhu

    2016-01-01

    Full Text Available Color measurement by a colorized vision system is a superior method to achieve the evaluation of color objectively and continuously. However, the accuracy of color measurement is influenced by the spectral responses of the digital sensor and the spectral mismatch of the illumination. In this paper, a colorized vision system with a digital sensor and an LED array illuminator is presented. The polynomial-based regression method is applied to solve the problem of color calibration in the sRGB and CIE L*a*b* color spaces. By mapping the tristimulus values from RGB to sRGB color space, the color difference between the estimated values and the reference values is less than 3 ΔE. Additionally, the mapping matrix ΦRGB→sRGB has shown better performance in reducing the color difference, and it is subsequently introduced into the proposed colorized vision system for better color measurement. The printed matter of clothes and the colored ceramic tile are chosen as the application experiment samples of our colorized vision system. As shown in the experimental data, the average color difference of the images is less than 6 ΔE. This indicates that better color measurement performance is obtained via the proposed colorized vision system.
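
    The following sketch shows the flavour of a polynomial-based regression calibration in its simplest (first-order) form: a least-squares fit of a mapping matrix from measured RGB patches to reference sRGB values. The patch values are invented; a real calibration would use a colour chart, and higher-order terms (R², G², B², RG, ...) can be appended to the design matrix in the same way.

        import numpy as np

        # Six colour patches: camera RGB readings and the corresponding reference sRGB values
        # (all numbers are placeholders).
        measured_rgb = np.array([[52, 20, 18], [198, 190, 180], [30, 80, 150],
                                 [120, 60, 40], [90, 140, 60], [200, 40, 35]], float)
        reference_srgb = np.array([[115, 82, 68], [243, 243, 242], [56, 61, 150],
                                   [133, 128, 177], [70, 148, 73], [175, 54, 60]], float)

        # Augment with a constant column so the fit corrects offsets as well as gains,
        # then solve for the 4x3 mapping matrix in the least-squares sense.
        X = np.hstack([measured_rgb, np.ones((len(measured_rgb), 1))])
        phi, *_ = np.linalg.lstsq(X, reference_srgb, rcond=None)

        corrected = X @ phi
        print(np.abs(corrected - reference_srgb).max())  # worst channel error on the training patches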

  8. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  9. Which Management Control System principles and aspects are relevant when deploying a learning machine?

    OpenAIRE

    Martin, Johansson; Mikael, Göthager

    2017-01-01

    How shall a business adapt its management control systems when learning machines enter the arena? Will the control system continue to focus on human aspects and continue to consider a learning machine to be an automation tool like any other historically programmed computer? Learning machines introduce productivity capabilities that achieve very high levels of efficiency and quality. A learning machine can sort through large amounts of data and draw conclusions that would be difficult for a human mind. Howev...

  10. Intellectual Control System of Processing on CNC Machines

    Science.gov (United States)

    Nekrasov, R. Y.; Lasukov, A. A.; Starikov, A. I.; Soloviev, I. V.; Bekareva, O. V.

    2016-04-01

    Scientific and technical progress makes great demands on the quality of engineering production. The priority is to ensure that metalworking equipment holds the required dimensional accuracy during the entire period of operation at minimum manufacturing cost. This article considers the problem of increasing the accuracy of machining products on CNC machines. The authors offer a solution based on compensating adjustments to the cutting tool trajectory and the machining mode. The necessity of creating mathematical models of process behavior in automated technological system operations (OATS) is substantiated. Based on this research, the authors propose a generalized diagram of diagnosis and operative input correction, together with approximate mathematical models of the individual diagnostic processes.
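
    The compensation idea can be pictured with a trivial, hypothetical sketch: a diagnosed error model predicts the deviation along the programmed path, and each commanded point is shifted by the opposite amount before the block is issued to the CNC. All numbers here are invented.

        # Nominal (x, y) toolpath points in mm and the modelled y-deviation at each point.
        nominal_path = [(0.0, 10.000), (5.0, 10.000), (10.0, 10.000)]
        predicted_deviation = [0.000, 0.012, 0.020]

        # Shift each commanded point opposite to the predicted deviation so the actual
        # path lands on the nominal one, then emit linear-interpolation blocks.
        corrected_path = [(x, y - dy) for (x, y), dy in zip(nominal_path, predicted_deviation)]
        for (x, y) in corrected_path:
            print(f"G01 X{x:.3f} Y{y:.3f}")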

  11. Design foundation of vacuum system for electron beam machine

    International Nuclear Information System (INIS)

    Darsono; Suprapto; Djasiman

    1999-01-01

    The vacuum system is a main part of an electron beam machine (EBM) because the electron beam cannot be produced without vacuum. The vacuum system consists of a vacuum pump, connecting pipes, valves, and a vacuum gauge. To design the vacuum system of an EBM, basic knowledge of vacuum science and technology is needed. The paper describes the types of vacuum pumps and the calculation of pipe conductance and pumping time of the vacuum system; these are then used as criteria for choosing a vacuum pump for the EBM. From the results of the study, it is concluded that for a 500 keV/10 mA EBM intended for wood coating, and with consideration of economic and technical factors, it is better to use a diffusion pump. (author)
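
    As an illustration of the kind of sizing calculation mentioned above (with invented numbers, valid only as a rough-pumping estimate), the pipe conductance C throttles the nominal pump speed S to an effective speed S_eff = S*C/(S+C), and the time to pump a chamber of volume V from p0 down to p1 is roughly t = (V/S_eff)*ln(p0/p1).

        from math import log

        V = 0.2                   # chamber volume, m^3 (assumed)
        S = 0.5                   # nominal pump speed, m^3/s (assumed)
        C = 0.3                   # connecting-pipe conductance, m^3/s (assumed)
        p0, p1 = 1.0e5, 1.0e-1    # start and target pressures, Pa

        S_eff = S * C / (S + C)                 # pipe conductance limits the usable pump speed
        t = (V / S_eff) * log(p0 / p1)          # simple exponential pump-down estimate
        print(f"effective speed {S_eff:.3f} m^3/s, rough-pumping time ~ {t:.0f} s")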

  12. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    Science.gov (United States)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First there is the innate variability of the wood material itself. No two species look exactly the same; in fact, appearance can differ significantly among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Secondly, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple mill work to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper describes the vision system that has been developed, assesses its current capabilities, and discusses directions for future research. It will be argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  13. Assessment of Beer Quality Based on a Robotic Pourer, Computer Vision, and Machine Learning Algorithms Using Commercial Beers.

    Science.gov (United States)

    Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Howell, Kate; Dunshea, Frank R

    2018-05-01

    Sensory attributes of beer are directly linked to perceived foam-related parameters and beer color. The aim of this study was to develop an objective predictive model using machine learning to assess the intensity levels of sensory descriptors in beer from physical measurements of color and foam-related parameters. A robotic pourer (RoboBEER) was used to obtain 15 color and foam-related parameters from 22 different commercial beer samples. A sensory session using quantitative descriptive analysis (QDA®) with trained panelists was conducted to assess the intensity of 10 beer descriptors. Results showed that principal component analysis explained 64% of the data variability, with correlations found between sensory and RoboBEER foam-related descriptors, such as a positive and significant correlation between carbon dioxide and carbonation mouthfeel (R = 0.62) and correlations of sensory viscosity with maximum volume of foam and total lifetime of foam (R = 0.75 and R = 0.77, respectively). Using the RoboBEER parameters as inputs, an artificial neural network (ANN) regression model showed high correlation (R = 0.91) in predicting the intensity levels of 10 related sensory descriptors such as yeast, grains and hops aromas, hops flavor, bitter, sour and sweet tastes, viscosity, carbonation, and astringency. This paper presents a novel approach for food science using machine modeling techniques that could contribute significantly to rapid screening of food and beverage products and to the implementation of Artificial Intelligence (AI). The use of RoboBEER to assess beer quality proved to be a reliable, objective, accurate, and less time-consuming method to predict sensory descriptors compared to trained sensory panels. Hence, this method could be useful as a rapid screening procedure to evaluate beer quality at the end of the production line for industry applications. © 2018 Institute of Food Technologists®.
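
    A hedged sketch of the modelling step (not the authors' network or data) is shown below: a small feed-forward neural network regressing 10 sensory-descriptor intensities from 15 physical parameters, trained here on random placeholder data purely to illustrate the input/output shapes.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((22, 15))          # 22 beers x 15 RoboBEER-style parameters (placeholder values)
        y = rng.random((22, 10)) * 10     # 22 beers x 10 descriptor intensities on a 0-10 scale (placeholder)

        # Single hidden layer, multi-output regression from physical measurements to intensities.
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict(X[:1]).round(2))   # predicted intensities for the first sample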

  14. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Bjorholm; Jensen, Kirsten

    2015-01-01

    The color assessment ability of a multispectral vision system is investigated by a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material; heterogeneous with varying scattering and reflectance...... are equally capable of measuring color. Moreover the vision system provides a more color rich assessment of fresh meat samples with a glossier surface, than the colorimeter. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods...... accounting for other sources of variation leading to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. (C) 2014 Elsevier Ltd. All rights reserved....

  15. Using Weightless Neural Networks for Vergence Control in an Artificial Vision System

    Directory of Open Access Journals (Sweden)

    Karin S. Komati

    2003-01-01

    Full Text Available This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating vergence eye movements. This methodology involves using weightless neural networks (WNNs) as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used for controlling the position of the ‘foveae’ of these cameras (the high-resolution region of the captured images). Our best architecture is able to control the foveae vergence movements with an average error of only 3.58 image pixels, which is equivalent to an angular error of approximately 0.629°.
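
    To give a flavour of what a weightless neural network is, the toy sketch below implements WiSARD-style RAM-node discriminators on 8-bit binary inputs; the vergence-control architectures in the paper are considerably more elaborate, and everything here is invented for illustration.

        import random

        class Discriminator:
            def __init__(self, n_bits, tuple_size, seed=0):
                rnd = random.Random(seed)
                idx = list(range(n_bits))
                rnd.shuffle(idx)
                # Partition the input bits into fixed random n-tuples; each tuple addresses
                # one "RAM node", stored here as the set of addresses seen during training.
                self.tuples = [idx[i:i + tuple_size] for i in range(0, n_bits, tuple_size)]
                self.rams = [set() for _ in self.tuples]

            def _addresses(self, bits):
                return [tuple(bits[i] for i in t) for t in self.tuples]

            def train(self, bits):
                for ram, addr in zip(self.rams, self._addresses(bits)):
                    ram.add(addr)

            def score(self, bits):
                # Response = number of RAM nodes that recognize their address.
                return sum(addr in ram for ram, addr in zip(self.rams, self._addresses(bits)))

        # Two discriminators (same input mapping), one per class of binary pattern.
        left = Discriminator(8, 2)
        right = Discriminator(8, 2)
        left.train([1, 1, 1, 1, 0, 0, 0, 0])
        right.train([0, 0, 0, 0, 1, 1, 1, 1])
        probe = [1, 1, 1, 0, 0, 0, 0, 0]
        print("left" if left.score(probe) >= right.score(probe) else "right")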

  16. Social Intelligence in a Human-Machine Collaboration System

    Science.gov (United States)

    Nakajima, Hiroshi; Morishima, Yasunori; Yamada, Ryota; Brave, Scott; Maldonado, Heidy; Nass, Clifford; Kawaji, Shigeyasu

    In today's information society, it is often argued that a new way of human-machine interaction is necessary. In this paper, an agent with social response capabilities has been developed to achieve this goal. There are two kinds of information exchanged between two entities: objective and functional information (e.g., facts, requests, states of matters, etc.) and subjective information (e.g., feelings, sense of relationship, etc.). Traditional interactive systems have been designed to handle the former kind of information. In contrast, this study presents social agents handling the latter type of information. The current study focuses on the sociality of the agent from the viewpoint of Media Equation theory. This article discusses the definition, importance, and benefits of social intelligence as agent technology and argues that social intelligence has the potential to enhance the user's perception of the system, which in turn can lead to improvements in the system's performance. In order to implement social intelligence in the agent, a mind model has been developed to render affective expressions and the personality of the agent. The mind model has been implemented in a human-machine collaborative learning system. One differentiating feature of the collaborative learning system is that it has an agent that performs as a co-learner with which the user interacts during the learning session. The mind model controls the social behaviors of the agent, thus making it possible for the user to have more social interactions with the agent. The experiment with the system suggested that a greater degree of learning was achieved when students worked with the co-learner agent, and that the co-learner agent with the mind model that expressed emotions resulted in a more positive attitude toward the system.

  17. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, which can enhance the robot's real-time interaction ability with humans, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could achieve clear vision in real time and build a mental map that helped it to be aware of users in front of it and to develop a positive interaction with them.
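
    A generic way to picture the horizontal-cell idea (not the authors' algorithm) is lateral inhibition approximated by a centre-surround, difference-of-Gaussians filter; the short Python/OpenCV sketch below applies it to a hypothetical greyscale frame to emphasise edges.

        import cv2

        frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE).astype("float32")

        # Narrow "photoreceptor/bipolar" response minus a broader "horizontal cell" response
        # approximates lateral inhibition and highlights edges.
        center = cv2.GaussianBlur(frame, (0, 0), sigmaX=1.0)
        surround = cv2.GaussianBlur(frame, (0, 0), sigmaX=3.0)
        edges = cv2.normalize(center - surround, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")

        cv2.imwrite("camera_frame_edges.png", edges)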

  18. Advanced man-machine system for nuclear power plants

    International Nuclear Information System (INIS)

    Masui, Takao; Naito, Norio; Kato, Kanji.

    1990-01-01

    Recent developments in artificial intelligence (AI) seem to offer new possibilities for strengthening the performance of operator support systems. From this point of view, a national project on Advanced Man-Machine System Development for Nuclear Power Plants (MMS-NPP) has been carried out since 1984 as an 8-year project. This project aims at establishing advanced operator support functions which support operators in their knowledge-based behaviors, and a smoother interface with the system. This paper describes the role of the MMS-NPP, the support functions and the main features of the MMS-NPP detailed design, with its focus placed on the realization methods, using AI technology, of the support functions for BWR and PWR plants. (author)

  19. Efficient operation of anisotropic synchronous machines for wind energy systems

    International Nuclear Information System (INIS)

    Eldeeb, Hisham; Hackl, Christoph M.; Kullick, Julian

    2016-01-01

    This paper presents an analytical solution for the Maximum-Torque-per-Ampere (MTPA) operation of synchronous machines (SMs) with anisotropy and magnetic cross-coupling, for application in wind turbine systems and airborne wind energy systems. For a given reference torque, the analytical MTPA solution provides the optimal stator current references which produce the desired torque while minimizing the stator copper losses. From an implementation point of view, the proposed analytical method is appealing in terms of its fast online computation (compared to classical numerical methods) and its enhancement of the efficiency of the electrical drive system. The efficiency of the analytical MTPA operation, with and without consideration of cross-coupling, is compared to the conventional method with zero direct current. (paper)
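
    To make the MTPA objective concrete, the numerical sketch below (generic parameters, no cross-coupling, and a brute-force search rather than the paper's analytical solution) finds, for a given stator current magnitude, the d-q current split that maximizes the torque of a salient synchronous machine, i.e. the torque obtained per ampere.

        from math import cos, sin, pi

        # Pole pairs, permanent-magnet flux linkage [Vs] and d/q inductances [H] (assumed values).
        p, psi_pm, Ld, Lq = 4, 0.1, 1.0e-3, 2.5e-3

        def torque(i_d, i_q):
            # Electromagnetic torque of a salient SM without cross-coupling.
            return 1.5 * p * (psi_pm * i_q + (Ld - Lq) * i_d * i_q)

        def mtpa_currents(i_s, steps=10000):
            # Sweep the current angle from the q-axis and keep the torque-maximizing split.
            best = max((torque(-i_s * sin(b), i_s * cos(b)), b) for b in
                       [k * (pi / 2) / steps for k in range(steps + 1)])
            t, beta = best
            return -i_s * sin(beta), i_s * cos(beta), t

        i_d, i_q, t = mtpa_currents(i_s=100.0)
        print(f"i_d = {i_d:.1f} A, i_q = {i_q:.1f} A, torque = {t:.1f} Nm")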

  20. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey