WorldWideScience

Sample records for include computer vision

  1. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  2. Computational vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  3. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
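
    The correlation-based recognition method mentioned above can be sketched as normalized cross-correlation template matching. The following is an illustrative sketch, not code from the survey; all array values and helper names are invented.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient of two equal-size arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Slide the template over the image; return the position maximizing NCC."""
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best_score:
                best_score, best_pos = s, (r, c)
    return best_pos, best_score

# Embed a small pattern in an empty image and recover its location.
image = np.zeros((8, 8))
image[3:5, 4:6] = [[0.2, 0.9], [0.9, 0.2]]
template = np.array([[0.2, 0.9], [0.9, 0.2]])
```

    An exact match yields the maximal coefficient of 1.0 at the embedding position.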

  4. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  5. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  6. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours).

    - Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics
    - Emphasis on algorithmic advances that will allow re-application in other...

  7. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  8. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  9. An overview of computer vision

    Science.gov (United States)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  10. Continuous learning in computer vision

    NARCIS (Netherlands)

    Pintea, S.L.

    2017-01-01

    In this thesis we focus on continuous learning, and specifically on continuous learning in the context of computer vision. Computer vision aims at interpreting the world from its visual dimension, in an automatic manner. The world in general is characterized by continuity, and so is the visual world

  11. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include:

    - Morphological Image Analysis for Computer Vision Applications
    - Methods for Detecting of Structural Changes in Computer Vision Systems
    - Hierarchical Adaptive KL-based Transform: Algorithms and Applications
    - Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores
    - A Way of Energy Analysis for Image and Video Sequence Processing
    - Optimal Measurement of Visual Motion Across Spatial and Temporal Scales
    - Scene Analysis Using Morphological Mathematics and Fuzzy Logic
    - Digital Video Stabilization in Static and Dynamic Scenes
    - Implementation of Hadamard Matrices for Image Processing
    - A Generalized Criterion ...

  12. COMPUTER VISION SYNDROME: A SHORT REVIEW.

    OpenAIRE

    Sameena; Mohd Inayatullah

    2012-01-01

    Computers are probably one of the biggest scientific inventions of the modern era, and since then they have become an integral part of our life. The increased usage of computers has led to a variety of ocular symptoms which includes eye strain, tired eyes, irritation, redness, blurred vision, and diplopia, collectively referred to as Computer Vision Syndrome (CVS). CVS may have a significant impact not only on visual comfort but also occupational productivit...

  13. Functional programming for computer vision

    Science.gov (United States)

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first-class data objects. Compared with imperative programs, functional programs can be parallelized better, and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for efficiently representing the data structures and objects common in computer vision. In particular, we will address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high-performance vision systems, and that a functional approach greatly simplifies the implementation and integration of vision systems. Examples in C++ and SML are given.
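
    As a hedged sketch of the "images as functions" idea (in Python rather than the paper's C++/SML), an image can be a pure function from pixel coordinates to intensity, with transformations as higher-order functions that never mutate any pixel store. All names below are illustrative assumptions, not the paper's API.

```python
def from_array(a):
    """Wrap a nested list as an image function (out-of-range pixels read as 0)."""
    h, w = len(a), len(a[0])
    return lambda x, y: a[y][x] if 0 <= x < w and 0 <= y < h else 0

def shift(img, dx, dy):
    """Translate an image purely by composing coordinate arithmetic with it."""
    return lambda x, y: img(x - dx, y - dy)

def threshold(img, t):
    """Binarize: returns a new image function; the original is untouched."""
    return lambda x, y: 1 if img(x, y) >= t else 0

raw = from_array([[0, 3], [7, 1]])
moved = shift(raw, 1, 0)    # pixel (0, 0) of raw now appears at (1, 0)
binary = threshold(raw, 2)  # raw itself is unchanged
```

    Because each transformation returns a new function, chains of operations compose freely and can be evaluated lazily per pixel, which is the encapsulation benefit the abstract alludes to.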

  14. Computer Vision for Timber Harvesting

    DEFF Research Database (Denmark)

    Dahl, Anders Lindbjerg

    The goal of this thesis is to investigate computer vision methods for timber harvesting operations. The background for developing computer vision for timber harvesting is to document the origin of timber and to collect qualitative and quantitative parameters concerning the timber for efficient harvest... segments. The purpose of image segmentation is to make the basis for more advanced computer vision methods like object recognition and classification. Our second method concerns image classification, and we present a method where we classify small timber samples to tree species based on Active Appearance... to the development of the logTracker system the described methods have a general applicability, making them useful for many other computer vision problems...

  15. Artificial intelligence and computer vision

    CERN Document Server

    Li, Yujie

    2017-01-01

    This edited book presents essential findings in the research fields of artificial intelligence and computer vision, with a primary focus on new research ideas and results for mathematical problems involved in computer vision systems. The book provides an international forum for researchers to summarize the most recent developments and ideas in the field, with a special emphasis on the technical and observational results obtained in the past few years.

  16. Machine Learning for Computer Vision

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2013-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year. A summary of the past Computer Vision Summer Schools can be found at: http://www.dmi.unict.it/icvss This edited volume contains a selection of articles covering some of the talks and t...

  17. Harnessing vision for computation.

    Science.gov (United States)

    Changizi, Mark

    2008-01-01

    Might it be possible to harness the visual system to carry out artificial computations, somewhat akin to how DNA has been harnessed to carry out computation? I provide the beginnings of a research programme attempting to do this. In particular, new techniques are described for building 'visual circuits' (or 'visual software') using wire, NOT, OR, and AND gates in a visual modality such that our visual system acts as 'visual hardware' computing the circuit, and generating a resultant perception which is the output.
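
    For illustration only: in the paper the gates are rendered as visual stimuli and evaluated by the viewer's visual system, but the Boolean logic such a circuit encodes can be sketched in software. The XOR construction below is an invented example showing that the three primitives named above are sufficient to compose richer functions.

```python
# The three primitive gates from which 'visual circuits' are assembled.
NOT = lambda a: not a
OR = lambda a, b: a or b
AND = lambda a, b: a and b

def xor(a, b):
    """XOR composed only from the NOT, OR, and AND primitives."""
    return AND(OR(a, b), NOT(AND(a, b)))
```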

  18. Computer vision for ambient intelligence

    NARCIS (Netherlands)

    Salah, A.A.; Gevers, T.; Sebe, N.; Vinciarelli, A.

    2011-01-01

    A natural way of conceptualizing ambient intelligence is by picturing an active environment with access to perceptual input, not via eyes and ears, but by their technological counterparts. Computer vision is an essential part of building context-aware environments that adapt and anticipate their

  19. Computer Vision and Mathematical Morphology

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Kropratsch, W.; Klette, R.; Albrecht, R.

    1996-01-01

    Mathematical morphology is a theory of set mappings, modeling binary image transformations, which are invariant under the group of Euclidean translations. This framework turns out to be too restricted for many applications, in particular for computer vision where group theoretical considerations

  20. Computer vision in microstructural analysis

    Science.gov (United States)

    Srinivasan, Malur N.; Massarweh, W.; Hough, C. L.

    1992-01-01

    The following is a laboratory experiment designed to be performed by advanced high-school and beginning college students. It is hoped that this experiment will create an interest in and further understanding of materials science. The objective of this experiment is to demonstrate that the microstructure of engineered materials is affected by the processing conditions in manufacture, and that it is possible to characterize the microstructure using image analysis with a computer. The principle of computer vision will first be introduced, followed by the description of the system developed at Texas A&M University. This in turn will be followed by the description of the experiment to obtain differences in microstructure and the characterization of the microstructure using computer vision.

  1. Computer vision and machine learning for archaeology

    NARCIS (Netherlands)

    van der Maaten, L.J.P.; Boon, P.; Lange, G.; Paijmans, J.J.; Postma, E.

    2006-01-01

    Until now, computer vision and machine learning techniques barely contributed to the archaeological domain. The use of these techniques can support archaeologists in their assessment and classification of archaeological finds. The paper illustrates the use of computer vision techniques for

  2. Understanding and preventing computer vision syndrome.

    Science.gov (United States)

    Loh, Ky; Redd, Sc

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society, but at the same time have caused symptoms related to their usage such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms that lead to computer vision syndrome are the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. Visual attributes of the computer display such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  3. Report on Computer Programs for Robotic Vision

    Science.gov (United States)

    Cunningham, R. T.; Kan, E. P.

    1986-01-01

    Collection of programs supports robotic research. Report describes computer-vision software library at NASA's Jet Propulsion Laboratory. Programs evolved during past 10 years of research into robotics. Collection includes low- and high-level image-processing software proved in applications ranging from factory automation to spacecraft tracking and grappling. Programs fall into several overlapping categories. The image-utilities category comprises low-level routines that provide computer access to image data and some simple graphical capabilities for displaying results of image processing.

  4. Computer vision syndrome: A review.

    Science.gov (United States)

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

    Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are either environmental (improper lighting, display position and viewing distance) or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified, the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and symptoms associated with the use of hand-held and stereoscopic displays.

  5. Benchmarking Neuromorphic Vision: Lessons Learnt from Computer Vision

    Directory of Open Access Journals (Sweden)

    Cheston Tan

    2015-10-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, and algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  6. Color in Computer Vision Fundamentals and Applications

    CERN Document Server

    Gevers, Theo; van de Weijer, Joost; Geusebroek, Jan-Mark

    2012-01-01

    While the field of computer vision drives many of today’s digital technologies and communication networks, the topic of color has emerged only recently in most computer vision applications. One of the most extensive works to date on color in computer vision, this book provides a complete set of tools for working with color in the field of image understanding. Based on the authors’ intense collaboration for more than a decade and drawing on the latest thinking in the field of computer science, the book integrates topics from color science and computer vision, clearly linking theor

  7. Computation and parallel implementation for early vision

    Science.gov (United States)

    Gualtieri, J. Anthony

    1990-01-01

    The problem of early vision is to transform one or more retinal illuminance images-pixel arrays-to image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
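
    The first of the three algorithms, edge finding in the scale-space formulation, can be sketched on a 1-D signal: smooth with a Gaussian of scale sigma, then take local maxima of the derivative magnitude as edges. This is an assumed minimal illustration; the kernel radius and threshold below are arbitrary choices, not values from the report.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized Gaussian kernel truncated at three standard deviations."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(signal, sigma):
    """Convolve the signal with a Gaussian at scale sigma."""
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

def edge_positions(signal, sigma, thresh=0.1):
    """Edges = local maxima of |d/dx| of the Gaussian-smoothed signal."""
    d = np.abs(np.diff(smooth(signal, sigma)))
    return [i for i in range(1, len(d) - 1)
            if d[i] > d[i - 1] and d[i] >= d[i + 1] and d[i] > thresh]

step = np.array([0.0] * 10 + [1.0] * 10)  # one step edge between 9 and 10
```

    On a clean step signal the detector reports the single transition; on an SIMD machine each pixel's convolution and local-maximum test would run in parallel.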

  8. MR imaging and computer vision

    International Nuclear Information System (INIS)

    Gerig, G.; Kikinis, R.; Kuoni, W.

    1989-01-01

    To parallel the rapid progress in MR data acquisition, the authors have developed advanced computer vision methods specifically adapted to the multidimensional and multispectral (T1- and T2-weighted) nature of MR data to extract, analyze, and visualize the morphologic properties of biologic tissues. A multistage image processing scheme is proposed, which performs the three-dimensional (3D) segmentation of the brain (gray and white matter) and ventricular system from two-echo MR volume data with only minimal user interaction. The quality of the segmentation demonstrates the high potential of MR acquisition along with 3D segmentation and 3D visualization for diagnosis, preoperative planning, and research. With segmentation, a fully quantitative 3D exploration is accessible

  9. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  10. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.

    - Provides i...

  11. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

    This special edited volume takes a unique computational approach to the emerging field of study called vision science. Optics, ophthalmology, and optical science have come a long way in optimizing the configurations of optical systems, surveillance cameras and other nano-optical devices with the help of nanoscience and technology. Still, these systems fall short on the computational side of matching the human vision system. In this edited volume much attention has been given to the coupling of computational science and vision studies. It is a comprehensive collection of research works addressing various related areas of vision science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision, etc. The volume carries some of the latest developments in the form of research articles and presentations, and is rich in content, with technical tools ...

  12. Computer vision based room interior design

    Science.gov (United States)

    Ahmad, Nasir; Hussain, Saddam; Ahmad, Kashif; Conci, Nicola

    2015-12-01

    This paper introduces a new application of computer vision. To the best of the authors' knowledge, it is the first attempt to incorporate computer vision techniques into room interior design. The computer vision based interior design proceeds in two steps: object identification and color assignment. An image segmentation approach is used for the identification of the objects in the room, and different color schemes are used for color assignment to these objects. The proposed approach is applied to simple as well as complex images from online sources. The proposed approach not only accelerates the process of interior design but also makes it very efficient by giving multiple alternatives.
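
    The two steps named in the abstract — object identification by segmentation, then color assignment — can be sketched crudely as intensity-band segmentation followed by a palette lookup. The band boundaries and palette below are invented for illustration; the paper's actual segmentation is more sophisticated than this.

```python
import numpy as np

def segment_by_intensity(gray, bins):
    """Crude segmentation: label each pixel by its intensity band."""
    return np.digitize(gray, bins)

def recolor(labels, palette):
    """Assign an RGB color to every segment label via array indexing."""
    return palette[labels]

gray = np.array([[0.1, 0.1, 0.8],
                 [0.1, 0.5, 0.8]])
labels = segment_by_intensity(gray, [0.3, 0.7])  # 3 bands -> labels 0..2
palette = np.array([[255, 255, 255],   # band 0 -> white (illustrative scheme)
                    [200, 150, 100],   # band 1 -> wood tone
                    [60, 90, 200]])    # band 2 -> blue accent
rgb = recolor(labels, palette)
```

    Swapping in a different palette row for a segment is what yields the "multiple alternatives" the abstract mentions.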

  13. A computer vision for animal ecology.

    Science.gov (United States)

    Weinstein, Ben G

    2017-11-07

    A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis.

  14. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  15. Object categorization: computer and human vision perspectives

    National Research Council Canada - National Science Library

    Dickinson, Sven J

    2009-01-01

    ... The result of a series of four highly successful workshops on the topic, the book gathers many of the most distinguished researchers from both computer and human vision to reflect on their experience...

  16. Computer Vision Assisted Virtual Reality Calibration

    Science.gov (United States)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  17. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer vision and robotics is one of the most challenging areas of the 21st century. Its applications range from agriculture to medicine, household applications to humanoids, deep-sea applications to space applications, and industrial applications to unmanned plants. Today's technologies demand intelligent machines, which enable applications in various domains and services. Robotics is one such area, encompassing a number of technologies, and its applications are widespread. Computational vision, or machine vision, is one of the most important tools for making a robot intelligent. This volume covers chapters from various areas of computational vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Objects using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, and CT and MRI Image Fusion based on the Stationary Wavelet Transform. The book also covers articles from applicati...

  18. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  19. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  20. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  1. Fulfilling the vision of autonomic computing

    OpenAIRE

    Dobson, Simon; Sterritt, Roy; Nixon, Paddy; Hinchey, Mike

    2010-01-01

    Efforts since 2001 to design self-managing systems have yielded many impressive achievements, yet the original vision of autonomic computing remains unfulfilled. Researchers must develop a comprehensive systems engineering approach to create effective solutions for next-generation enterprise and sensor systems.

  2. Computational and cognitive neuroscience of vision

    CERN Document Server

    2017-01-01

    Despite a plethora of scientific literature devoted to vision research and the trend toward integrative research, the borders between disciplines remain a practical difficulty. To address this problem, this book provides a systematic and comprehensive overview of vision from various perspectives, ranging from neuroscience to cognition, and from computational principles to engineering developments. It is written by leading international researchers in the field, with an emphasis on linking multiple disciplines and the impact such synergy can lead to in terms of both scientific breakthroughs and technology innovations. It is aimed at active researchers and interested scientists and engineers in related fields.

  3. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  4. A computational model for dynamic vision

    Science.gov (United States)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  5. JPL Robotics Laboratory computer vision software library

    Science.gov (United States)

    Cunningham, R.

    1984-01-01

    The past ten years of research on computer vision have matured into a powerful real-time system comprising standardized commercial hardware, computers, and pipeline-processing laboratory prototypes, supported by an extensive set of image processing algorithms. The software system was constructed to be transportable through the choice of a popular high-level language (PASCAL) and a widely used computer (VAX-11/750). It comprises a broad range of low-level and high-level processing software that has proven versatile for applications ranging from factory automation to space satellite tracking and grappling.

  6. Computational Vision Based on Neurobiology

    Science.gov (United States)

    1993-07-09

    ...comprised of 5 micro-patterns per row. The stimulus was presented with an SOA of 60 ms. The continuous line shows direction discrimination when the micro... ...these pathways is debated. ... including dancing, running, walking, identity, and gender ... the ventral pathway passes through areas V1, V2, and V4 ...

  7. Eyesight quality and Computer Vision Syndrome.

    Science.gov (United States)

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016 on 60 people who were divided into two groups: Group 1 - 30 middle school pupils with a mean age of 11.9 ± 1.86 years, and Group 2 - 30 patients evaluated in the Ophthalmology Clinic, "Sf. Spiridon" Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer's test. A questionnaire containing 8 questions that highlighted the gadgets' impact on eyesight was also distributed. The use of different gadgets, such as computers, laptops, mobile phones, or other displays, has become part of everyday life, and people experience a variety of ocular symptoms and vision problems related to them. Computer Vision Syndrome (CVS) is a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by prolonged use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for a long time sustain a prolonged accommodative effort. Small refractive errors (especially a myopic shift) have been objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement in visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the adverse effects that prolonged use of gadgets has on eyesight.

  8. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
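The pipeline this abstract describes, statistical feature preprocessing followed by a neural-network classifier, can be sketched in miniature. Everything below is an illustrative stand-in: the data are synthetic, PCA stands in for the full ANOVA/PCA preprocessing, and plain gradient descent replaces the authors' GLPτS metaheuristic training.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "feature vectors": two classes in 20-D, separated along three axes
X0 = rng.normal(0.0, 1.0, (50, 20)); X0[:, :3] += 3.0
X1 = rng.normal(0.0, 1.0, (50, 20)); X1[:, :3] -= 3.0
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def pca_fit_transform(X, k):
    """Project centered data onto its top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

Z = pca_fit_transform(X, k=2)          # dimensionality reduction step

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# one-hidden-layer network trained with plain gradient descent (cross-entropy)
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(500):
    H = np.tanh(Z @ W1 + b1)                 # hidden layer activations
    p = sigmoid(H @ W2 + b2).ravel()         # predicted P(class 1)
    g = (p - y) / len(y)                     # d(loss)/d(logit) for cross-entropy
    gW2 = H.T @ g[:, None]; gb2 = g.sum(keepdims=True)
    gH = g[:, None] @ W2.T * (1 - H ** 2)    # backpropagate through tanh
    gW1 = Z.T @ gH;         gb1 = gH.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = (sigmoid(np.tanh(Z @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
accuracy = (pred == y).mean()
```

On such cleanly separated synthetic data the reduced 2-D representation is enough for the small network to classify almost perfectly; the 95% figure reported in the abstract refers to the authors' real cork-tile data, not to this toy setup.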

  9. Photogrammetric computer vision statistics, geometry, orientation and reconstruction

    CERN Document Server

    Förstner, Wolfgang

    2016-01-01

    This textbook offers a statistical view on the geometry of multiple view analysis, required for camera calibration and orientation and for geometric scene reconstruction based on geometric image features. The authors have backgrounds in geodesy and also long experience with development and research in computer vision, and this is the first book to present a joint approach from the converging fields of photogrammetry and computer vision. Part I of the book provides an introduction to estimation theory, covering aspects such as Bayesian estimation, variance components, and sequential estimation, with a focus on the statistically sound diagnostics of estimation results essential in vision metrology. Part II provides tools for 2D and 3D geometric reasoning using projective geometry. This includes oriented projective geometry and tools for statistically optimal estimation and test of geometric entities and transformations and their rela­tions, tools that are useful also in the context of uncertain reasoning in po...

  10. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  11. Total Variation Applications in Computer Vision

    OpenAIRE

    Estrela, Vania V.; Magalhaes, Hermes Aguiar; Saotome, Osamu

    2016-01-01

    The objectives of this chapter are: (i) to introduce a concise overview of regularization; (ii) to define and to explain the role of a particular type of regularization called total variation norm (TV-norm) in computer vision tasks; (iii) to set up a brief discussion on the mathematical background of TV methods; and (iv) to establish a relationship between models and a few existing methods to solve problems cast as TV-norm. For the most part, image-processing algorithms blur the edges of the ...
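As a concrete instance of the TV-norm regularization discussed above, here is a minimal 1-D denoiser minimizing the usual ROF-style objective ½‖u − f‖² + λ·TV(u) by gradient descent on a smoothed TV term. The signal, λ, smoothing ε, and step size are illustrative choices, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# piecewise-constant signal plus Gaussian noise
clean = np.repeat([0.0, 1.0, 0.3], 50)
noisy = clean + rng.normal(0.0, 0.1, clean.size)

def tv_denoise_1d(f, lam=0.2, eps=1e-3, steps=3000, lr=0.02):
    """Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
    (a smoothed total-variation term) by plain gradient descent."""
    u = f.copy()
    for _ in range(steps):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)   # smoothed sign of each jump
        g = u - f                      # gradient of the data term
        g[:-1] -= lam * w              # d/du[i]   of the smoothed TV term
        g[1:] += lam * w               # d/du[i+1] of the smoothed TV term
        u -= lr * g
    return u

denoised = tv_denoise_1d(noisy)
tv = lambda x: np.abs(np.diff(x)).sum()
```

The result illustrates the chapter's central point: the TV penalty suppresses small oscillations (noise) while preserving the large jumps (edges), which a quadratic smoothness penalty would blur.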

  12. IOBSERVER: species recognition via computer vision

    OpenAIRE

    Martín Rodríguez, Fernando; Barral Martínez, Mónica; Besteiro Fernández, Ángel; Vilán Vilán, José Antonio

    2016-01-01

    This paper describes the design of an automated computer vision system that recognizes the species of individual fish as they are sorted on a fishing vessel and produces a report file with that information. This system is called iObserver and it is part of project Life-iSEAS (Life program). A very first version of the system has been tested at the oceanographic vessel “Miguel Oliver”. At the time of writing a more advanced prototype is being tested onboard other o...

  13. Nuclear fuel assembly identification using computer vision

    International Nuclear Information System (INIS)

    Moffett, S.D.

    1985-01-01

    This report describes an improved method of remotely identifying irradiated nuclear fuel assemblies. The method uses existing in-cell TV cameras to input an image of the notch-coded top of the fuel assemblies into a computer vision system, which then produces the identifying number for that assembly. This system replaces systems that use either a mechanical mechanism to feel the notches or use human operators to locate notches visually. The system was developed for identifying fuel assemblies from the Fast Flux Test Facility (FFTF) and the Clinch River Breeder Reactor, but could be used for other reactor assembly identification, as appropriate

  14. Colour vision and computer-generated images

    International Nuclear Information System (INIS)

    Ramek, Michael

    2010-01-01

    Colour vision deficiencies affect approximately 8% of the male and approximately 0.4% of the female population. In this work, it is demonstrated that computer generated images oftentimes pose unnecessary problems for colour deficient viewers. Three examples, the visualization of molecular structures, graphs of mathematical functions, and colour coded images from numerical data are used to identify problematic colour combinations: red/black, green/black, red/yellow, yellow/white, fuchsia/white, and aqua/white. Alternatives for these combinations are discussed.

  15. Computer vision for image-based transcriptomics.

    Science.gov (United States)

    Stoeger, Thomas; Battich, Nico; Herrmann, Markus D; Yakimovich, Yauhen; Pelkmans, Lucas

    2015-09-01

    Single-cell transcriptomics has recently emerged as one of the most promising tools for understanding the diversity of the transcriptome among single cells. Image-based transcriptomics is unique compared to other methods as it does not require conversion of RNA to cDNA prior to signal amplification and transcript quantification. Thus, its efficiency in transcript detection is unmatched by other methods. In addition, image-based transcriptomics allows the study of the spatial organization of the transcriptome in single cells at single-molecule, and, when combined with superresolution microscopy, nanometer resolution. However, in order to unlock the full power of image-based transcriptomics, robust computer vision of single molecules and cells is required. Here, we shortly discuss the setup of the experimental pipeline for image-based transcriptomics, and then describe in detail the algorithms that we developed to extract, at high-throughput, robust multivariate feature sets of transcript molecule abundance, localization and patterning in tens of thousands of single cells across the transcriptome. These computer vision algorithms and pipelines can be downloaded from: https://github.com/pelkmanslab/ImageBasedTranscriptomics. Copyright © 2015. Published by Elsevier Inc.
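The per-cell quantification step described above (extracting multivariate features from segmented cells) can be sketched as follows. The label image and intensities are a toy stand-in for a real segmentation, and the feature set is a small subset of what the authors' pipeline computes.

```python
import numpy as np

# toy 8x8 "microscopy" image with two labelled cells (label 0 = background)
labels = np.zeros((8, 8), dtype=int)
labels[1:4, 1:4] = 1          # cell 1: 3x3 block
labels[5:7, 4:8] = 2          # cell 2: 2x4 block
intensity = np.where(labels == 1, 10.0, 0.0) + np.where(labels == 2, 3.0, 0.0)

def cell_features(labels, intensity):
    """Per-cell feature vectors: area, total/mean intensity, centroid row/col."""
    feats = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue                   # skip background
        mask = labels == lab
        rows, cols = np.nonzero(mask)
        feats[int(lab)] = {
            "area": int(mask.sum()),
            "total_intensity": float(intensity[mask].sum()),
            "mean_intensity": float(intensity[mask].mean()),
            "centroid": (float(rows.mean()), float(cols.mean())),
        }
    return feats

feats = cell_features(labels, intensity)
```

A production pipeline such as the one at the linked repository computes hundreds of such features per cell (localization, patterning, texture); the point here is only the structure of the extraction loop.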

  16. Mahotas: Open source software for scriptable computer vision

    Directory of Open Access Journals (Sweden)

    Luis Pedro Coelho

    2013-07-01

    Full Text Available Mahotas is a computer vision library for Python. It contains traditional image processing functionality such as filtering and morphological operations as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, a dynamic programming language, which is appropriate for fast development, but the algorithms are implemented in C++ and are tuned for speed. The library is designed to fit in with the scientific software ecosystem in this language and can leverage the existing infrastructure developed in that language. Mahotas is released under a liberal open source license (MIT License) and is available from http://github.com/luispedro/mahotas and from the Python Package Index (http://pypi.python.org/pypi/mahotas). Tutorials and full API documentation are available online at http://mahotas.readthedocs.org/.

  17. Can computational goals inform theories of vision?

    Science.gov (United States)

    Anderson, Barton L

    2015-04-01

    One of the most lasting contributions of Marr's posthumous book is his articulation of the different "levels of analysis" that are needed to understand vision. Although a variety of work has examined how these different levels are related, there is comparatively little examination of the assumptions on which his proposed levels rest, or the plausibility of the approach Marr articulated given those assumptions. Marr placed particular significance on computational level theory, which specifies the "goal" of a computation, its appropriateness for solving a particular problem, and the logic by which it can be carried out. The structure of computational level theory is inherently teleological: What the brain does is described in terms of its purpose. I argue that computational level theory, and the reverse-engineering approach it inspires, requires understanding the historical trajectory that gave rise to functional capacities that can be meaningfully attributed with some sense of purpose or goal, that is, a reconstruction of the fitness function on which natural selection acted in shaping our visual abilities. I argue that this reconstruction is required to distinguish abilities shaped by natural selection-"natural tasks" -from evolutionary "by-products" (spandrels, co-optations, and exaptations), rather than merely demonstrating that computational goals can be embedded in a Bayesian model that renders a particular behavior or process rational. Copyright © 2015 Cognitive Science Society, Inc.

  18. A practical introduction to computer vision with OpenCV

    CERN Document Server

    Dawson-Howe, Kenneth

    2014-01-01

    Explains the theory behind basic computer vision and provides a bridge from the theory to practical implementation using the industry standard OpenCV libraries Computer Vision is a rapidly expanding area and it is becoming progressively easier for developers to make use of this field due to the ready availability of high quality libraries (such as OpenCV 2).  This text is intended to facilitate the practical use of computer vision with the goal being to bridge the gap between the theory and the practical implementation of computer vision. The book will explain how to use the relevant OpenCV

  19. Local spatial frequency analysis for computer vision

    Science.gov (United States)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
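A minimal illustration of the combined space/frequency idea: estimate the dominant spatial frequency of local windows with a windowed Fourier transform (a crude stand-in for the paper's representation), so that a texture whose scale changes across the image yields different local frequency estimates.

```python
import numpy as np

def dominant_frequency(patch):
    """Dominant spatial frequency (cycles/pixel) of a 2-D patch via the FFT."""
    F = np.abs(np.fft.fft2(patch - patch.mean()))   # remove DC, take magnitude
    fy = np.fft.fftfreq(patch.shape[0])
    fx = np.fft.fftfreq(patch.shape[1])
    iy, ix = np.unravel_index(np.argmax(F), F.shape)
    return np.hypot(fy[iy], fx[ix])

# texture whose frequency differs between the left and right halves
x = np.arange(64)
img = np.empty((64, 64))
img[:, :32] = np.sin(2 * np.pi * 0.10 * x)[None, :32]   # 0.10 cycles/pixel
img[:, 32:] = np.sin(2 * np.pi * 0.25 * x)[None, 32:]   # 0.25 cycles/pixel

f_left = dominant_frequency(img[:, :32])
f_right = dominant_frequency(img[:, 32:])
```

In a shape-from-texture setting, such a local frequency map is the raw material: under perspective, a uniformly textured surface produces a smooth spatial gradient of local frequency from which slant can be inferred.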

  20. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
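As a minimal stand-in for the machine-learning step described above (classifying cells from high-dimensional feature representations), here is a nearest-centroid classifier on synthetic feature vectors. Real HCS pipelines use far richer models; all dimensions and separations below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic per-cell feature vectors (e.g. area/intensity/texture measurements)
# for two phenotype classes, separated along the first feature
train_a = rng.normal(0.0, 1.0, (200, 16)); train_a[:, 0] += 4.0
train_b = rng.normal(0.0, 1.0, (200, 16)); train_b[:, 0] -= 4.0

def fit_centroids(groups):
    """One mean feature vector per class."""
    return np.stack([g.mean(axis=0) for g in groups])

def predict(centroids, X):
    """Assign each row of X to the nearest class centroid (squared distance)."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

C = fit_centroids([train_a, train_b])
test = np.vstack([rng.normal(0, 1, (50, 16)) + np.r_[4.0, np.zeros(15)],
                  rng.normal(0, 1, (50, 16)) - np.r_[4.0, np.zeros(15)]])
truth = np.array([0] * 50 + [1] * 50)
acc = (predict(C, test) == truth).mean()
```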

  1. Non-Boolean computing with nanomagnets for computer vision applications.

    Science.gov (United States)

    Bhanja, Sanjukta; Karunaratne, D K; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.
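The quadratic (Ising-style) energy minimization that the nanomagnetic system performs in hardware can be emulated in software. Below, simulated annealing finds the ground state of a small illustrative Hamiltonian; the couplings are invented, and annealing stands in for the physical relaxation of the magnetic system.

```python
import itertools
import math
import random

# Ising-style quadratic objective: E(s) = -sum J[i,j]*s[i]*s[j] - sum h[i]*s[i]
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): -1.0}
h = [0.2, 0.0, 0.0, -0.1]
n = 4

def energy(s):
    e = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    e -= sum(hi * si for hi, si in zip(h, s))
    return e

def anneal(steps=5000, t0=2.0, seed=0):
    """Metropolis simulated annealing over spin configurations s in {-1,+1}^n."""
    rnd = random.Random(seed)
    s = [rnd.choice([-1, 1]) for _ in range(n)]
    e = energy(s)
    best, best_e = s[:], e
    for k in range(steps):
        t = max(t0 * (1 - k / steps), 1e-3)    # linear cooling schedule
        i = rnd.randrange(n)
        s[i] = -s[i]                           # propose a single spin flip
        e_new = energy(s)
        if e_new <= e or rnd.random() < math.exp((e - e_new) / t):
            e = e_new                          # accept
            if e < best_e:
                best, best_e = s[:], e
        else:
            s[i] = -s[i]                       # reject: undo the flip
    return best, best_e

best, best_e = anneal()
gs_e = min(energy(s) for s in itertools.product([-1, 1], repeat=n))
```

In the paper, energies of this form encode quadratic vision objectives (e.g. labeling problems), and the magnets relax to a low-energy state in parallel rather than by sequential sampling.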

  2. Multifactorial Uncertainty Assessment for Monitoring Population Abundance using Computer Vision

    NARCIS (Netherlands)

    E.M.A.L. Beauxis-Aussalet (Emmanuelle); L. Hardman (Lynda)

    2015-01-01

    Computer vision enables in-situ monitoring of animal populations at a lower cost and with less ecosystem disturbance than with human observers. However, computer vision uncertainty may not be fully understood by end-users, and the uncertainty assessments performed by technology experts

  3. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3-dimension. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  4. Gesture Recognition by Computer Vision : An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  5. A Framework for Generic State Estimation in Computer Vision Applications

    NARCIS (Netherlands)

    Sminchisescu, Cristian; Telea, Alexandru

    2001-01-01

    Experimenting and building integrated, operational systems in computational vision poses both theoretical and practical challenges, involving methodologies from control theory, statistics, optimization, computer graphics, and interaction. Consequently, a control and communication structure is needed

  6. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros to more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl deemed for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure that accounts for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
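Of the segmentation steps the abstract lists, the cavity step uses region growing; a minimal 4-connected implementation on a toy image might look like this (the seed, tolerance, and image are illustrative, not the authors' parameters).

```python
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed pixel by at most `tol`."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if abs(float(img[r, c]) - seed_val) > tol:
            continue                       # outside the homogeneity criterion
        mask[r, c] = True
        stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return mask

# toy X-ray-like image: bright nacre (200) with a dark cavity (30)
img = np.full((10, 10), 200.0)
img[3:6, 3:7] = 30.0
cavity = region_grow(img, seed=(4, 4), tol=20.0)
```

Once cavity and nucleus boundaries are in hand, the radial distance between the pearl's outer boundary and the nucleus (minus any cavity) gives the nacre thickness profile.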

  7. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    Science.gov (United States)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  8. Fish species recognition using computer vision and a neural network

    NARCIS (Netherlands)

    Storbeck, F.; Daan, B.

    2001-01-01

    A system is described to recognize fish species by computer vision and a neural network program. The vision system measures a number of features of fish as seen by a camera perpendicular to a conveyor belt. The features used here are the widths and heights at various locations along the fish. First

  9. Deep Learning for Computer Vision: A Brief Review

    Directory of Open Access Journals (Sweden)

    Athanasios Voulodimos

    2018-01-01

    Full Text Available Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.

  10. Deep Learning for Computer Vision: A Brief Review.

    Science.gov (United States)

    Voulodimos, Athanasios; Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.
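The convolutional layer at the heart of the CNNs reviewed here reduces to a sliding dot product. A minimal "valid" correlation with a Sobel kernel (shown here as an illustration, not as any particular network's weights) makes the edge-response behavior concrete.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# a horizontal-gradient (Sobel) kernel responding at the left edge of a bright region
img = np.zeros((6, 6))
img[:, 3:] = 1.0
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
edges = conv2d(img, sobel_x)
```

In a trained CNN the kernel weights are learned rather than hand-designed, and many such maps are stacked with nonlinearities, but each channel is computed exactly this way.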

  12. Specifying colours for colour vision testing using computer graphics.

    Science.gov (United States)

    Toufeeq, A

    2004-10-01

This paper describes a novel test of colour vision using a standard personal computer, which is simple and reliable to perform. Twenty healthy individuals with normal colour vision and 10 healthy individuals with a red/green colour defect were tested binocularly at 13 selected points in the CIE (Commission Internationale de l'Éclairage, 1931) chromaticity triangle, representing the gamut of a computer monitor, where the x, y coordinates of the primary colour phosphors were known. The mean results from individuals with normal colour vision were compared to those with defective colour vision. Of the 13 points tested, five demonstrated consistently high sensitivity in detecting colour defects. The test may provide a convenient method for classifying colour vision abnormalities.
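The chromaticity arithmetic behind such a test can be sketched as follows: given the known (x, y) coordinates of a monitor's three phosphors, the chromaticity of any displayed colour is a luminance-weighted additive mixture in CIE 1931 space. This is an illustrative sketch, not the paper's implementation; the phosphor coordinates below are assumed sRGB-like values, not those of the study's monitor.

```python
def xy_to_XYZ(x, y, Y=1.0):
    # Convert chromaticity (x, y) plus luminance Y to tristimulus XYZ.
    X = x * Y / y
    Z = (1 - x - y) * Y / y
    return (X, Y, Z)

def mix_chromaticity(primaries, weights):
    """primaries: list of (x, y) phosphor chromaticities;
    weights: relative luminances of the R, G, B channels.
    Returns the chromaticity (x, y) of the additive mixture."""
    Xs = Ys = Zs = 0.0
    for (x, y), w in zip(primaries, weights):
        X, Y, Z = xy_to_XYZ(x, y, w)
        Xs, Ys, Zs = Xs + X, Ys + Y, Zs + Z
    total = Xs + Ys + Zs
    return (Xs / total, Ys / total)

# Illustrative phosphor chromaticities (close to sRGB primaries),
# NOT the monitor measured in the study:
PRIMARIES = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
```

Driving only the red channel reproduces the red phosphor's chromaticity, while mixing all three with luminance weights moves the point toward the white region of the triangle.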

  13. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach is first to design a computational structure well suited to a wide range of vision tasks, and then to develop parallel algorithms that can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high-speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high-level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  14. Computer graphics visions and challenges: a European perspective.

    Science.gov (United States)

    Encarnação, José L

    2006-01-01

    I have briefly described important visions and challenges in computer graphics. They are a personal and therefore subjective selection. But most of these issues have to be addressed and solved--no matter if we call them visions or challenges or something else--if we want to make and further develop computer graphics into a key enabling technology for our IT-based society.

  15. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    Science.gov (United States)

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

In this study, we present a promising method of computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among different quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before the image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and then feature extraction was performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier, which was designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between the classification done by human inspectors and that done by computer vision. The computer vision-based method correctly classified 90% of the salmon in the data set, as compared with classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementation of today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace manual labor, on which grading tasks still rely.
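The grading pipeline above, a threshold classifier derived from linear discriminant analysis and evaluated with leave-one-out cross-validation, can be sketched in one dimension. This is a minimal illustration with synthetic feature values (assume, say, a length-to-width ratio), not the paper's multi-feature classifier.

```python
import numpy as np

def fit_threshold(features, labels):
    # Midpoint between the two class means -- the 1-D analogue of a
    # linear-discriminant decision boundary under equal variances.
    m0 = features[labels == 0].mean()
    m1 = features[labels == 1].mean()
    return 0.5 * (m0 + m1), m1 > m0   # threshold, and whether class 1 lies above it

def loo_accuracy(features, labels):
    # Leave-one-out cross-validation: refit on n-1 samples, test the held-out one.
    hits, n = 0, len(features)
    for i in range(n):
        mask = np.arange(n) != i
        thr, up = fit_threshold(features[mask], labels[mask])
        pred = int(features[i] > thr) if up else int(features[i] <= thr)
        hits += pred == labels[i]
    return hits / n

# Two well-separated synthetic "grade" classes (hypothetical values):
feats = np.array([1.0, 1.1, 0.9, 1.05, 2.0, 2.1, 1.9, 2.05])
labs = np.array([0, 0, 0, 0, 1, 1, 1, 1])
```

On separable data like this, the leave-one-out accuracy is 1.0; on real geometrical features the same procedure yields the kind of agreement percentage reported above.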

  16. THE PIXHAWK OPEN-SOURCE COMPUTER VISION FRAMEWORK FOR MAVS

    Directory of Open Access Journals (Sweden)

    L. Meier

    2012-09-01

Full Text Available Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State-of-the-art autonomous systems are, however, geared towards operation at safe, obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  17. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

Victor Wiley

    2018-02-01

Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning, and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of recent technologies and theoretical concepts explaining the development of computer vision, especially in relation to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand events or descriptions, and recognize scenic patterns. It applies methods from multiple application domains to massive data analysis. This paper summarizes recent developments and reviews related to computer vision, image processing, and their associated studies. We categorize the computer vision mainstream into groups such as image processing, object recognition, and machine learning, and we also briefly explain up-to-date information about the techniques and their performance.

  18. An image-computable psychophysical spatial vision model.

    Science.gov (United States)

    Schütt, Heiko H; Wichmann, Felix A

    2017-10-01

    A large part of classical visual psychophysics was concerned with the fundamental question of how pattern information is initially encoded in the human visual system. From these studies a relatively standard model of early spatial vision emerged, based on spatial frequency and orientation-specific channels followed by an accelerating nonlinearity and divisive normalization: contrast gain-control. Here we implement such a model in an image-computable way, allowing it to take arbitrary luminance images as input. Testing our implementation on classical psychophysical data, we find that it explains contrast detection data including the ModelFest data, contrast discrimination data, and oblique masking data, using a single set of parameters. Leveraging the advantage of an image-computable model, we test our model against a recent dataset using natural images as masks. We find that the model explains these data reasonably well, too. To explain data obtained at different presentation durations, our model requires different parameters to achieve an acceptable fit. In addition, we show that contrast gain-control with the fitted parameters results in a very sparse encoding of luminance information, in line with notions from efficient coding. Translating the standard early spatial vision model to be image-computable resulted in two further insights: First, the nonlinear processing requires a denser sampling of spatial frequency and orientation than optimal coding suggests. Second, the normalization needs to be fairly local in space to fit the data obtained with natural image masks. Finally, our image-computable model can serve as tool in future quantitative analyses: It allows optimized stimuli to be used to test the model and variants of it, with potential applications as an image-quality metric. In addition, it may serve as a building block for models of higher level processing.
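The core contrast gain-control computation described above, channel responses passed through an accelerating nonlinearity and divided by pooled activity, can be sketched as follows. The exponents and semi-saturation constant are illustrative placeholders, not the fitted parameters of the paper's model.

```python
import numpy as np

def gain_control(responses, p=2.4, q=2.0, c50=0.1):
    """Divisive normalization over a vector of channel responses.

    p   -- exponent of the accelerating nonlinearity (assumed value)
    q   -- exponent used in the normalization pool (assumed value)
    c50 -- semi-saturation constant (assumed value)
    """
    r = np.abs(responses) ** p                     # accelerating nonlinearity
    pool = c50 ** q + np.mean(np.abs(responses) ** q)  # pooled divisive term
    return r / pool
```

Because the pool grows with overall activity, doubling the input raises the output by much less than the bare nonlinearity would, which is the compressive behavior underlying contrast discrimination and masking data.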

  19. Machine learning and computer vision approaches for phenotypic profiling.

    Science.gov (United States)

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  20. Safety Computer Vision Rules for Improved Sensor Certification

    DEFF Research Database (Denmark)

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints....... The language allows developers to increase trustworthiness in the robot perception system, which we argue would increase compliance with safety standards. We demonstrate the usage of the language to improve reliability in a perception pipeline, thus allowing the vision expert to concisely express the safety...

  1. DIKU-LASMEA Workshop on Computer Vision, Copenhagen, March, 2009

    DEFF Research Database (Denmark)

    Fihl, Preben

    This report will cover the participation in the DIKU-LASMEA Workshop on Computer Vision held at the department of computer science, University of Copenhagen, in March 2009. The report will give a concise description of the topics presented at the workshop, and briefly discuss how the work relates...... to the HERMES project and human motion and action recognition....

  2. Computer Vision Syndrome and Associated Factors Among Medical ...

    African Journals Online (AJOL)

Background: Almost all institutions, colleges, universities, and homes today use computers regularly. Very little research has been carried out on Indian users, especially college students, regarding the effects of computer use on eye and vision-related problems. Aim: The aim of this study was to assess the prevalence ...

  3. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  4. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  5. Tracking by Identification Using Computer Vision and Radio

    Science.gov (United States)

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent computer-vision-based localization and strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
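One way to picture "tracking by identification" is as an assignment problem: anonymous but precise vision detections receive the identities of coarse radio position estimates by minimizing total assignment distance. The brute-force sketch below (fine for a handful of people) is my illustration of the idea, not the paper's algorithm, and the coordinates are made up.

```python
from itertools import permutations

def assign_identities(vision_pts, radio_pts):
    """For each vision detection, return the index of the radio tag
    (identity) assigned to it, minimizing summed squared distance."""
    n = len(vision_pts)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(
            (vx - radio_pts[perm[i]][0]) ** 2 + (vy - radio_pts[perm[i]][1]) ** 2
            for i, (vx, vy) in enumerate(vision_pts)
        )
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

vision = [(1.0, 1.0), (4.0, 4.0), (8.0, 1.0)]   # precise, but anonymous
radio = [(4.5, 3.5), (0.8, 1.4), (7.5, 0.9)]    # coarse, but carries IDs
```

For larger crowds one would swap the permutation search for the Hungarian algorithm, but the fused output is the same: vision-grade positions labeled with radio-grade identities.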

  6. State-Estimation Algorithm Based on Computer Vision

    Science.gov (United States)

    Bayard, David; Brugarolas, Paul

    2007-01-01

An algorithm and software to implement the algorithm are being developed as a means of estimating the state (that is, the position and velocity) of an autonomous vehicle, relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.
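A standard building block for this kind of vision-based state estimation is a recursive filter over position and velocity. The sketch below is a generic 1-D constant-velocity Kalman filter fed with position fixes (as a camera-based feature tracker might supply), not the actual flight algorithm; its noise parameters are illustrative assumptions.

```python
import numpy as np

def kalman_cv(measurements, dt=1.0, q=1e-4, r=0.25):
    """Fuse scalar position measurements into a [position, velocity] estimate."""
    x = np.array([measurements[0], 0.0])      # initial state: [pos, vel]
    P = np.eye(2)                             # initial state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q = q * np.eye(2)                         # process noise (assumed)
    for z in measurements[1:]:
        x = F @ x                             # predict state forward
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                   # innovation covariance
        K = P @ H.T / S                       # Kalman gain
        x = x + (K * (z - H @ x)).ravel()     # correct with the measurement
        P = (np.eye(2) - K @ H) @ P
    return x
```

Fed noise-free positions from an object moving at 2 units per step, the estimate converges to that position and velocity; the 2-D or 3-D case follows the same predict/correct structure with larger state and measurement vectors.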

  7. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    Acts as a single source reference providing readers with an overview of how computer vision can contribute to the different applications in the field of road transportation. This book presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each presented in a tutorial manner, describing the motivation for and benefits of the application, and a description of the state of the art.

  8. An Enduring Dialogue between Computational and Empirical Vision.

    Science.gov (United States)

    Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J

    2018-04-01

    In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Computer vision approaches to medical image analysis. Revised papers

    International Nuclear Information System (INIS)

    Beichel, R.R.; Sonka, M.

    2006-01-01

    This book constitutes the thoroughly refereed post proceedings of the international workshop Computer Vision Approaches to Medical Image Analysis, CVAMIA 2006, held in Graz, Austria in May 2006 as a satellite event of the 9th European Conference on Computer Vision, EECV 2006. The 10 revised full papers and 11 revised poster papers presented together with 1 invited talk were carefully reviewed and selected from 38 submissions. The papers are organized in topical sections on clinical applications, image registration, image segmentation and analysis, and the poster session. (orig.)

  10. Computer vision and machine learning with RGB-D sensors

    CERN Document Server

    Shao, Ling; Kohli, Pushmeet

    2014-01-01

    This book presents an interdisciplinary selection of cutting-edge research on RGB-D based computer vision. Features: discusses the calibration of color and depth cameras, the reduction of noise on depth maps and methods for capturing human performance in 3D; reviews a selection of applications which use RGB-D information to reconstruct human figures, evaluate energy consumption and obtain accurate action classification; presents an approach for 3D object retrieval and for the reconstruction of gas flow from multiple Kinect cameras; describes an RGB-D computer vision system designed to assist t

  11. OpenCV 3.0 computer vision with Java

    CERN Document Server

    Baggio, Daniel Lélis

    2015-01-01

    If you are a Java developer, student, researcher, or hobbyist wanting to create computer vision applications in Java then this book is for you. If you are an experienced C/C++ developer who is used to working with OpenCV, you will also find this book very useful for migrating your applications to Java. All you need is basic knowledge of Java, with no prior understanding of computer vision required, as this book will give you clear explanations and examples of the basics.

  12. Centaure: an heterogeneous parallel architecture for computer vision

    International Nuclear Information System (INIS)

    Peythieux, Marc

    1997-01-01

This dissertation deals with the architecture of parallel computers dedicated to computer vision. In the first chapter, the problem to be solved is presented, as well as the architecture of the Sympati and Symphonie computers, on which this work is based. The second chapter surveys the state of the art of computers and integrated processors that can execute computer vision and image processing codes. The third chapter describes the architecture of Centaure. It has a heterogeneous structure: it is composed of a multiprocessor system based on the Analog Devices ADSP21060 Sharc digital signal processor, and of a set of Symphonie computers working in a multi-SIMD fashion. Centaure also has a modular structure. Its basic node is composed of one Symphonie computer, tightly coupled to a Sharc thanks to a dual-ported memory. The nodes of Centaure are linked together by the Sharc communication links. The last chapter deals with a performance validation of Centaure. The execution times on Symphonie and on Centaure of a benchmark typical of industrial vision are presented and compared. First, these results show that the basic node of Centaure allows faster execution than Symphonie, and that increasing the size of the tested computer leads to a better speed-up with Centaure than with Symphonie. Second, these results validate the choice of running the low-level structure of Centaure in a multi-SIMD fashion. (author) [fr

  13. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  14. X-ray machine vision and computed tomography

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    This survey examines how 2-D x-ray machine vision and 3-D computed tomography will be used in industry in the 1988-1995 timeframe. Specific applications are described and rank-ordered in importance. The types of companies selling and using 2-D and 3-D systems are profiled, and markets are forecast for 1988 to 1995. It is known that many machine vision and automation companies are now considering entering this field. This report looks at the potential pitfalls and whether recent market problems similar to those recently experienced by the machine vision industry will likely occur in this field. FTS will publish approximately 100 other surveys in 1988 on emerging technology in the fields of AI, manufacturing, computers, sensors, photonics, energy, bioengineering, and materials

  15. Iris features-based heart disease diagnosis by computer vision

    Science.gov (United States)

    Nguchu, Benedictor A.; Li, Li

    2017-07-01

The study takes advantage of several new breakthroughs in computer vision technology to develop a new iris-based biomedical platform that processes iris images for early detection of heart disease. Guaranteeing early detection of heart disease opens the possibility of non-surgical treatment, as suggested by biomedical researchers and associated institutions. However, our observation is that a clinically practicable solution that is both sensitive and specific for early detection is still lacking; as a result, mortality among the vulnerable continues to rise. Delayed diagnostic procedures, inefficiency, and the complications of available methods are further reasons for this situation. This research therefore proposes the novel IFB (Iris Features Based) method for diagnosis of premature and early-stage heart disease. The method incorporates computer vision and iridology to obtain a robust, non-contact, non-radioactive, and cost-effective diagnostic tool. It analyzes abnormal inherent weakness in tissues and changes in color and patterns in the specific region of the iris that responds to impulses of the heart organ, as per the Bernard Jensen iris chart. Changes in the iris infer the presence of degenerative abnormalities in the heart. These changes are precisely detected and analyzed by the IFB method, which includes tensor-based gradients (TBG), multi-orientation Gabor filters (GF), textural oriented features (TOF), and speeded-up robust features (SURF). Kernel-based and multi-class support vector machine classifiers are used to classify normal and pathological iris features. Experimental results demonstrated that the proposed method not only has better diagnostic performance, but also provides an insight for early detection of other diseases.
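Of the feature extractors listed above, the multi-orientation Gabor filter bank is the easiest to sketch. The kernel below is a generic Gabor (Gaussian envelope times a cosine carrier) with illustrative parameters; it is not the paper's specific filter design.

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0):
    """Real (even) Gabor kernel: Gaussian envelope times a cosine carrier
    oriented at angle theta. All parameter values are assumptions."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)      # rotate coordinates
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

# A small bank at four orientations, as a texture feature front-end:
bank = [gabor_kernel(theta=t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Correlating an image patch with each kernel yields orientation-selective responses: a grating varying along x excites the theta=0 kernel strongly and the theta=pi/2 kernel hardly at all, which is what makes such banks useful for oriented iris texture.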

  16. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision.

    Science.gov (United States)

    Wolff, J Gerard

    2014-01-01

    The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
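The run-length encoding technique mentioned above can be sketched in a few lines; this is the generic textbook scheme, offered only to illustrate how redundancy in uniform image areas (e.g., runs of equal pixel values along a row) is extracted.

```python
def rle_encode(seq):
    """Compress a sequence into (value, run_length) pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([v, 1])       # start a new run
    return [(v, n) for v, n in out]

def rle_decode(pairs):
    """Invert rle_encode, recovering the original sequence."""
    return [v for v, n in pairs for _ in range(n)]
```

Long uniform runs collapse to short pairs, while the boundaries between runs, the edges and corners the SP account highlights, are exactly where new pairs must be emitted.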

  17. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  18. A Knowledge-Intensive Approach to Computer Vision Systems

    NARCIS (Netherlands)

    Koenderink-Ketelaars, N.J.J.P.

    2010-01-01

    This thesis focusses on the modelling of knowledge-intensive computer vision tasks. Knowledge-intensive tasks are tasks that require a high level of expert knowledge to be performed successfully. Such tasks are generally performed by a task expert. Task experts have a lot of experience in performing

  19. Spatially invariant computations in stereoscopic vision.

    Science.gov (United States)

    Vidal-Naquet, Michel; Gepshtein, Sergei

    2012-01-01

Perception of stereoscopic depth requires that visual systems solve a correspondence problem: find parts of the left-eye view of the visual scene that correspond to parts of the right-eye view. The standard model of binocular matching implies that similarity of left and right images is computed by inter-ocular correlation. But the left and right images of the same object are normally distorted relative to one another by the binocular projection, in particular when slanted surfaces are viewed from close distance. Correlation often fails to detect correct correspondences between such image parts. We investigate a measure of inter-ocular similarity that takes advantage of spatially invariant computations similar to the computations performed by complex cells in biological visual systems. This measure tolerates distortions of corresponding image parts and yields excellent performance over a much larger range of surface slants than the standard model. The results suggest that, rather than serving as disparity detectors, multiple binocular complex cells take part in the computation of inter-ocular similarity, and that visual systems are likely to postpone commitment to particular binocular disparities until later stages in the visual process.
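The standard model's inter-ocular correlation can be sketched as a normalized cross-correlation over candidate disparities. The 1-D signals below are synthetic, and the sketch deliberately omits the spatially invariant (complex-cell-like) pooling the paper proposes; it shows only the baseline the paper argues against.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-length patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_disparity(left_patch, right_row, max_d):
    """Slide the left-eye patch along the right-eye row and return the
    disparity (shift) with the highest inter-ocular correlation."""
    w = len(left_patch)
    scores = [ncc(left_patch, right_row[d:d + w]) for d in range(max_d + 1)]
    return int(np.argmax(scores))
```

When the two views are undistorted copies, correlation peaks at the true shift; the paper's point is that for slanted surfaces the right-eye patch is a warped copy, and this peak degrades unless a more tolerant, spatially invariant similarity is used.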

  20. Dictionary of computer vision and image processing

    National Research Council Canada - National Science Library

    Fisher, R. B

    2014-01-01

    ... been identified for inclusion since the current edition was published. Revised to include an additional 1000 new terms to reflect current updates, which includes a significantly increased focus on image processing terms, as well as machine learning terms...

  1. Grid computing : enabling a vision for collaborative research

    International Nuclear Information System (INIS)

    von Laszewski, G.

    2002-01-01

In this paper the authors provide a motivation for Grid computing based on a vision to enable a collaborative research environment. The authors' vision goes beyond the connection of hardware resources. They argue that with an infrastructure such as the Grid, new modalities for collaborative research are enabled. They provide an overview showing why Grid research is difficult, and they present a number of management-related issues that must be addressed to make Grids a reality. They list projects that provide solutions to subsets of these issues.

  2. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  3. Review: computer vision applied to the inspection and quality control of fruits and vegetables

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-12-01

Full Text Available This is a review of the existing literature on the inspection of fruits and vegetables using computer vision, analyzing the techniques most commonly used to estimate various quality-related properties. Typical applications of such systems include classification, quality estimation according to internal and external characteristics, monitoring of fruit during storage, and the evaluation of experimental treatments. In general, computer vision systems not only replace manual inspection but can also exceed its capabilities. In conclusion, computer vision systems are powerful tools for the automatic inspection of fruits and vegetables, and the development of such systems adapted to the food industry is fundamental to achieving competitive advantages.

  4. A computer vision based candidate for functional balance test.

    Science.gov (United States)

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, disease, injury and environmental factors. The Centers for Disease Control and Prevention (CDC) estimated the cost of falls among older adults at $34 billion in 2013, a figure expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments, followed by the subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for a functional balance test. The test takes less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance testing and warrants further investigation to assess validity in clinical settings, including acute care, long-term care and assisted living facilities. Our long-term goals include non-intrusive approaches to assessing balance competence during ADL in independent living environments.

  5. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  6. Computer vision for quality grading in fish processing

    OpenAIRE

    Misimi, Ekrem

    2007-01-01

High labour costs, due to existing technology that still involves a high degree of manual processing, incur overall high production costs in the fish processing industry. Therefore, a higher degree of automation of processing lines is often desirable, and this strategy has been adopted by the Norwegian fish processing industry to cut down production costs. In fish processing, despite a slower uptake than in other domains of industry, the use of computer vision as a strategy for au...

  7. Computer vision analysis captures atypical attention in toddlers with autism.

    Science.gov (United States)

    Campbell, Kathleen; Carpenter, Kimberly Lh; Hashemi, Jordan; Espinosa, Steven; Marsan, Samuel; Borg, Jana Schaich; Chang, Zhuoqing; Qiu, Qiang; Vermeer, Saritha; Adler, Elizabeth; Tepper, Mariano; Egger, Helen L; Baker, Jeffery P; Sapiro, Guillermo; Dawson, Geraldine

    2018-03-01

To demonstrate the capability of computer vision analysis to detect atypical orienting and attention behaviors in toddlers with autism spectrum disorder. One hundred and four toddlers aged 16-31 months (mean = 22 months) participated in this study. Twenty-two of the toddlers had autism spectrum disorder and 82 had typical development or developmental delay. Toddlers watched video stimuli on a tablet while the built-in camera recorded their head movement. Computer vision analysis measured participants' attention and orienting in response to name calls. Reliability of the computer vision analysis algorithm was tested against a human rater. Differences in behavior were analyzed between the autism spectrum disorder group and the comparison group. Reliability between computer vision analysis and human coding for orienting to name was excellent (intra-class coefficient 0.84, 95% confidence interval 0.67-0.91). Only 8% of toddlers with autism spectrum disorder oriented to name calling on >1 trial, compared to 63% of toddlers in the comparison group (p = 0.002). Mean latency to orient was significantly longer for toddlers with autism spectrum disorder (2.02 vs 1.06 s, p = 0.04). The sensitivity of atypical orienting for autism spectrum disorder was 96% and the specificity was 38%. Older toddlers with autism spectrum disorder showed less attention to the videos overall (p = 0.03). Automated coding offers a reliable, quantitative method for detecting atypical social orienting and reduced sustained attention in toddlers with autism spectrum disorder.
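The reported sensitivity and specificity follow the usual confusion-matrix definitions; a minimal sketch (the counts below are hypothetical illustrations, not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen only to reproduce 96%/38% for illustration:
sens, spec = sensitivity_specificity(tp=24, fn=1, tn=19, fp=31)
print(round(sens, 2), round(spec, 2))  # → 0.96 0.38
```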

  8. Computer Vision for the Solar Dynamics Observatory (SDO)

    Science.gov (United States)

    Martens, P. C. H.; Attrill, G. D. R.; Davey, A. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Savcheva, A.; Su, Y.; Testa, P.; Wills-Davey, M.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F.; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgoulis, M. K.; McAteer, R. T. J.; Timmons, R. P.

    2012-01-01

In Fall 2008 NASA selected a large international consortium to produce a comprehensive automated feature-recognition system for the Solar Dynamics Observatory (SDO). The SDO data that we consider are all of the Atmospheric Imaging Assembly (AIA) images plus surface magnetic-field images from the Helioseismic and Magnetic Imager (HMI). We produce robust, very efficient, professionally coded software modules that can keep up with the SDO data stream and detect, trace, and analyze numerous phenomena, including flares, sigmoids, filaments, coronal dimmings, polarity inversion lines, sunspots, X-ray bright points, active regions, coronal holes, EIT waves, coronal mass ejections (CMEs), coronal oscillations, and jets. We also track the emergence and evolution of magnetic elements down to the smallest detectable features and will provide at least four full-disk, nonlinear, force-free magnetic field extrapolations per day. The detection of CMEs and filaments is accomplished with Solar and Heliospheric Observatory (SOHO)/Large Angle and Spectrometric Coronagraph (LASCO) and ground-based Hα data, respectively. A completely new software element is a trainable feature-detection module based on a generalized image-classification algorithm. Such a trainable module can be used to find features that have not yet been discovered (as, for example, sigmoids were in the pre-Yohkoh era). Our codes will produce entries in the Heliophysics Events Knowledgebase (HEK) as well as produce complete catalogs for results that are too numerous for inclusion in the HEK, such as the X-ray bright-point metadata. This will permit users to locate data on individual events as well as carry out statistical studies on large numbers of events, using the interface provided by the Virtual Solar Observatory. The operations concept for our computer vision system is that the data will be analyzed in near real time as soon as they arrive at the SDO Joint Science Operations Center and have undergone basic

  9. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Silvia Matiacevich

    2013-01-01

Full Text Available Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke”, “Brigitta”, “Elliott”, “Centurion”, “Star”, and “Jewel”, measuring quality parameters such as °Brix, pH and moisture content using standard techniques, and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P<0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements.

  10. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  11. Computer vision research at Marshall Space Flight Center

    Science.gov (United States)

    Vinz, Frank L.

    1990-01-01

Orbital docking, inspection, and servicing are operations which have the potential for capability enhancement as well as cost reduction for space operations by the application of computer vision technology. Research at MSFC has been a natural outgrowth of orbital docking simulations for remote manually controlled vehicles such as the Teleoperator Retrieval System and the Orbital Maneuvering Vehicle (OMV). The baseline design of the OMV dictates teleoperator control from a ground station. This necessitates a high data-rate communication network and results in several seconds of time delay. Operational costs and vehicle control difficulties could be alleviated by an autonomous or semi-autonomous control system onboard the OMV which would be based on a computer vision system having the capability to recognize video images in real time. A concept under development at MSFC with these attributes is based on syntactic pattern recognition. It uses tree graphs for rapid recognition of binary images of known orbiting target vehicles. This technique and others being investigated at MSFC will be evaluated in realistic conditions by the use of MSFC orbital docking simulators. Computer vision is also being applied at MSFC as part of the supporting development for Work Package One of Space Station Freedom.

  12. Atoms of recognition in human and computer vision.

    Science.gov (United States)

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  13. Honey characterization using computer vision system and artificial neural networks.

    Science.gov (United States)

    Shafiee, Sahameh; Minaei, Saeid; Moghaddam-Charkari, Nasrollah; Barzegar, Mohsen

    2014-09-15

This paper reports the development of a computer vision system (CVS) for non-destructive characterization of honey based on colour and its correlated chemical attributes including ash content (AC), antioxidant activity (AA), and total phenolic content (TPC). Artificial neural network (ANN) models were applied to transform RGB values of images to CIE L*a*b* colourimetric measurements and to predict AC, TPC and AA from colour features of images. The developed ANN models were able to convert RGB values to CIE L*a*b* colourimetric parameters with a low generalization error of 1.01±0.99. In addition, the developed models for prediction of AC, TPC and AA showed high performance based on colour parameters of honey images, as the R² values for prediction were 0.99, 0.98, and 0.87 for AC, AA and TPC, respectively. The experimental results show the effectiveness and possibility of applying CVS for non-destructive honey characterization by the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.
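The RGB-to-CIE L*a*b* mapping that the ANN learns empirically can also be written analytically for calibrated input; a minimal sketch of the standard conversion, assuming sRGB primaries and a D65 white point (a real camera pipeline would need colour calibration for this to hold):

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIE L*a*b* under a D65 white."""
    c = np.asarray(rgb, dtype=float) / 255.0
    # Inverse sRGB gamma (linearize)
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the sRGB/D65 matrix
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = m @ lin
    # Normalize by the D65 reference white
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

print(srgb_to_lab((255, 255, 255)))  # white → L* ≈ 100, a* ≈ 0, b* ≈ 0
```

An ANN trained on imaging data, as in this paper, effectively absorbs the camera and lighting characteristics that this closed-form conversion assumes away.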

  14. Computer vision syndrome and ergonomic practices among undergraduate university students.

    Science.gov (United States)

    Mowatt, Lizette; Gordon, Carron; Santosh, Arvind Babu Rajendra; Jones, Thaon

    2018-01-01

To determine the prevalence of computer vision syndrome (CVS) and ergonomic practices among students in the Faculty of Medical Sciences at The University of the West Indies (UWI), Jamaica. A cross-sectional study was done with a self-administered questionnaire. Four hundred and nine students participated; 78% were females. The mean age was 21.6 years. Neck pain (75.1%), eye strain (67%), shoulder pain (65.5%) and eye burn (61.9%) were the most common CVS symptoms. Dry eyes (26.2%), double vision (28.9%) and blurred vision (51.6%) were the least commonly experienced symptoms. Eye burning (P = .001), eye strain (P = .041) and neck pain (P = .023) were significantly related to the level of viewing. Moderate eye burning (55.1%) and double vision (56%) occurred in those who used handheld devices (P = .001 and .007, respectively). Moderate blurred vision was reported by 52% of those who looked down at the device, compared with 14.8% of those who held it at an angle. Severe eye strain occurred in 63% of those who looked down at a device, compared with 21% of those who kept the device at eye level. Shoulder pain was not related to pattern of use. Ocular symptoms and neck pain were less likely if the device was held just below eye level. There is a high prevalence of CVS symptoms among university students; improved ergonomic practices could reduce them, in particular neck pain, eye strain and eye burning. © 2017 John Wiley & Sons Ltd.

  15. Computer vision syndrome: a review of ocular causes and potential treatments.

    Science.gov (United States)

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also occupational productivity since between 64% and 90% of computer users experience visual symptoms which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes for this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  16. MER-DIMES : a planetary landing application of computer vision

    Science.gov (United States)

    Cheng, Yang; Johnson, Andrew; Matthies, Larry

    2005-01-01

During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify the image data to a level ground plane. Feature selection and tracking are employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.
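Once the images are rectified to the ground plane, the core velocity geometry reduces to pinhole scaling; a minimal sketch under flat-terrain, nadir-pointing-camera assumptions (the numbers are illustrative, not MER flight data, and the real DIMES implementation handles attitude, fusion and validation on top of this):

```python
def horizontal_velocity(pixel_shift, altitude_m, focal_px, dt_s):
    """Estimate horizontal velocity (m/s) from a tracked feature's image
    displacement, for a nadir-pointing camera over flat terrain.

    pixel_shift : feature displacement between frames, in pixels
    altitude_m  : radar-altimeter altitude above the ground plane
    focal_px    : camera focal length expressed in pixels
    dt_s        : time between the two descent images
    """
    ground_shift = pixel_shift * altitude_m / focal_px  # pinhole scaling
    return ground_shift / dt_s

# A 12-pixel shift seen from 1000 m with a 600 px focal length over 3.75 s:
print(horizontal_velocity(12, 1000.0, 600.0, 3.75))  # → ~5.33 m/s
```

Comparing such vision-derived estimates against the inertial measurement unit, as the abstract describes, is what lets the system reject incorrectly tracked features.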

  17. Computer-vision-based inspecting system for needle roller bearing

    Science.gov (United States)

    Li, Wei; He, Tao; Zhong, Fei; Wu, Qinhua; Zhong, Yuning; Shi, Teiling

    2006-11-01

A Computer Vision based Inspecting System for Needle Roller Bearings (CVISNRB) is proposed in this paper, together with its key technology, main functions and operating principle. CVISNRB comprises a mechanical transmission and automatic feeding system, an imaging system, inspection algorithms, an automatic sorting system for inspected bearings, a human-computer interface, a pneumatic control system and an electric control system. Introducing computer vision into needle roller bearing inspection solves the problem of inspecting small needle roller bearings in bearing production enterprises, raises inspection speed, and realizes automatic, non-contact, on-line examination. CVISNRB can reliably detect missing needles and report an accurate count. It achieves an accuracy of 99.5% and an inspection speed of 15 needle roller bearings per minute, and it has run without malfunction in actual operation over the past half year, meeting practical needs.
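Counting discrete parts such as needle rollers in a segmented binary image is typically done by connected-component labeling; a minimal flood-fill sketch (the toy image is invented, and the actual CVISNRB algorithm is not described in this abstract):

```python
import numpy as np

def count_blobs(binary):
    """Count 4-connected foreground components in a binary image,
    e.g. bright needle rollers segmented from a bearing image."""
    img = np.asarray(binary, dtype=bool)
    seen = np.zeros_like(img, dtype=bool)
    h, w = img.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if img[r, c] and not seen[r, c]:
                count += 1           # new component found
                stack = [(r, c)]
                seen[r, c] = True
                while stack:         # iterative flood fill
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

frame = np.array([[0, 1, 1, 0, 0],
                  [0, 1, 0, 0, 1],
                  [0, 0, 0, 0, 1],
                  [1, 0, 0, 0, 0]])
print(count_blobs(frame))  # → 3
```

Comparing the component count against the bearing's nominal needle count is one straightforward way such a system could flag a missing needle.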

  18. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo

  19. Computational vision systems for the detection of malignant melanoma.

    Science.gov (United States)

    Maglogiannis, Ilias; Kosmopoulos, Dimitrios I

    2006-01-01

    In recent years, computational vision-based diagnostic systems for dermatology have demonstrated significant progress. We review these systems by first presenting the installation, visual features utilized for skin lesion classification and the methods for defining them. We also describe how to extract these features through digital image processing methods, i.e. segmentation, registration, border detection, color and texture processing, and present how to use the extracted features for skin lesion classification by employing artificial intelligence methods, i.e. discriminant analysis, neural networks, and support vector machines. Finally, we compare these techniques in discriminating malignant melanoma tumors versus dysplastic naevi lesions.

  20. Shape perception in human and computer vision an interdisciplinary perspective

    CERN Document Server

    Dickinson, Sven J

    2013-01-01

    This comprehensive and authoritative text/reference presents a unique, multidisciplinary perspective on Shape Perception in Human and Computer Vision. Rather than focusing purely on the state of the art, the book provides viewpoints from world-class researchers reflecting broadly on the issues that have shaped the field. Drawing upon many years of experience, each contributor discusses the trends followed and the progress made, in addition to identifying the major challenges that still lie ahead. Topics and features: examines each topic from a range of viewpoints, rather than promoting a speci

  1. Computer vision techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar

    1990-01-01

Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles, presents challenging problems. Research is described which applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.

  2. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    Science.gov (United States)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  3. Neural networks and neuroscience-inspired computer vision.

    Science.gov (United States)

    Cox, David Daniel; Dean, Thomas

    2014-09-22

    Brains are, at a fundamental level, biological computing machines. They transform a torrent of complex and ambiguous sensory information into coherent thought and action, allowing an organism to perceive and model its environment, synthesize and make decisions from disparate streams of information, and adapt to a changing environment. Against this backdrop, it is perhaps not surprising that computer science, the science of building artificial computational systems, has long looked to biology for inspiration. However, while the opportunities for cross-pollination between neuroscience and computer science are great, the road to achieving brain-like algorithms has been long and rocky. Here, we review the historical connections between neuroscience and computer science, and we look forward to a new era of potential collaboration, enabled by recent rapid advances in both biologically-inspired computer vision and in experimental neuroscience methods. In particular, we explore where neuroscience-inspired algorithms have succeeded, where they still fail, and we identify areas where deeper connections are likely to be fruitful. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Computer vision uncovers predictors of physical urban change.

    Science.gov (United States)

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  5. TO STUDY THE ROLE OF ERGONOMICS IN THE MANAGEMENT OF COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    Anshu

    2016-03-01

Full Text Available INTRODUCTION: Ergonomics is the science of designing job equipment and the workplace to fit the worker, obtaining a correct match between the human body, work-related tasks and work tools. By applying the science of ergonomics we can reduce the difficulties faced by computer users. OBJECTIVES: To evaluate the efficacy of tear substitutes and the role of ergonomics in the management of Computer Vision Syndrome; to develop a counseling plan and an initial treatment plan, prevent complications, and educate subjects about the disease process to enhance public awareness. MATERIALS AND METHODS: A minimum of 100 subjects were selected randomly, irrespective of gender, place and nature of computer work, and ethnic differences. The subjects were between 10 and 60 years of age and had been using a computer for a minimum of 2 hours/day for at least 5-6 days a week. The subjects underwent Schirmer's test, tear film breakup time (TBUT), inter-blink interval and ocular surface staining. A Computer Vision score was derived from 5 symptoms, each given a score of 2: foreign body sensation, redness, eyestrain, blurring of vision and frequent change in refraction. A score of more than 6 was treated as Computer Vision Syndrome, and these subjects underwent synoptophore tests and refraction. RESULTS: The 100 subjects were divided into 2 groups of 50 each; one group was given tear substitutes only, while in the other ergonomics was considered together with tear substitutes. There was more improvement after 4 weeks and 8 weeks in the group using lubricants together with ergonomics than lubricants alone, with the greatest improvement seen in eyestrain and blurring (P<0.05). CONCLUSION: Advanced training in proper computer usage can decrease discomfort.

  6. Implementation of Water Quality Management by Fish School Detection Based on Computer Vision Technology

    OpenAIRE

    Yan Hou

    2015-01-01

    To detect abnormal water quality, this study proposed a biological water abnormity detection method based on computer vision technology combined with a Support Vector Machine (SVM). First, computer vision is used to acquire fish school motion feature parameters that reflect the water quality, and these parameters are then preprocessed. Next, the sample set is established and the water quality abnormity monitoring model based on computer vision technology combined wit...

  7. Computer vision-based automatic beverage dispenser prototype for user experience studies

    OpenAIRE

    Merchán, Fernando; Valderrama, Elba; Poveda, Martín

    2017-01-01

    This paper presents several aspects of the implementation of a prototype of automatic beverage dispenser with computer vision functionalities. The system presents touchless technologies including face recognition for user identification and hand gesture recognition for beverage selection. This prototype is a test platform to explore the acceptance of these technologies by consumers and to compare it with other technologies such as touch screens. We present both the technical aspects of the de...

  8. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    OpenAIRE

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purpose to the object tracking tasks using only limited amount of tra...

  9. Automated cutting in the food industry using computer vision

    KAUST Repository

    Daley, Wayne D R

    2012-01-01

    The processing of natural products has posed a significant problem to researchers and developers involved in the development of automation. The challenges have come from areas such as sensing, grasping and manipulation, as well as product-specific areas such as cutting and handling of meat products. Meat products are naturally variable, and fixed automation is at the limit of its ability to accommodate these products. Intelligent automation systems (such as robots) are also challenged, mostly because of a lack of knowledge of the physical characteristics of the individual products. Machine vision has helped to address some of these shortcomings but underperforms in many situations. Developments in sensors, software and processing power are now offering capabilities that will help to make more of these problems tractable. In this chapter we describe some of the developments that are underway in computer vision for meat product applications, the problems they are addressing and potential future trends. © 2012 Woodhead Publishing Limited. All rights reserved.

  10. Computer vision syndrome and associated factors among medical and engineering students in chennai.

    Science.gov (United States)

    Logaraj, M; Madhupriya, V; Hegde, Sk

    2014-03-01

    Almost all institutions, colleges, universities and homes today use computers regularly. Very little research has been carried out on Indian users, especially college students, regarding the effects of computer use on eye and vision related problems. The aim of this study was to assess the prevalence of computer vision syndrome (CVS) among medical and engineering students and the factors associated with the same. A cross-sectional study was conducted among medical and engineering college students of a University situated in the suburban area of Chennai. Students who used a computer in the month preceding the date of study were included in the study. The participants were surveyed using a pre-tested structured questionnaire. Among engineering students, the prevalence of CVS was found to be 81.9% (176/215), while among medical students it was found to be 78.6% (158/201). A significantly higher proportion of engineering students, 40.9% (88/215), used computers for 4-6 h/day as compared to medical students, 10% (20/201), and symptoms were correspondingly more common among engineering students than among medical students. Students who used the computer for 4-6 h were at significantly higher risk of developing redness (OR = 1.2, 95% CI = 1.0-3.1, P = 0.04) and burning sensation (OR = 2.1, 95% CI = 1.3-3.1) than students who used the computer for less than 4 h. Significant correlation was found between increased hours of computer use and the symptoms redness, burning sensation, blurred vision and dry eyes. The present study revealed that more than three-fourths of the students complained of at least one of the symptoms of CVS while working on the computer.

  11. Computer Vision and Computer Graphics Analysis of Paintings and Drawings: An Introduction to the Literature

    Science.gov (United States)

    Stork, David G.

    In the past few years, a number of scholars trained in computer vision, pattern recognition, image processing, computer graphics, and art history have developed rigorous computer methods for addressing an increasing number of problems in the history of art. In some cases, these computer methods are more accurate than even highly trained connoisseurs, art historians and artists. Computer graphics models of artists’ studios and subjects allow scholars to explore ‘‘what if’’ scenarios and determine artists’ studio praxis. Rigorous computer ray-tracing software sheds light on claims that some artists employed optical tools. Computer methods will not replace traditional art historical methods of connoisseurship but rather enhance and extend them. As such, for these computer methods to be useful to the art community, they must continue to be refined through application to a variety of significant art historical problems.

  12. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community.  Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
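
    As a minimal illustration of the template matching this book covers, the sketch below scores every window of a grey-level image against a template using normalized cross-correlation. This is the generic textbook formulation, not code from the book or its website.

```python
import numpy as np

def ncc_match(image, template):
    """Slide the template over the image and return the top-left corner
    of the best-scoring window under normalized cross-correlation,
    together with that score (1.0 means a perfect match)."""
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()               # zero-mean window
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

    The brute-force double loop is deliberate for clarity; practical systems compute the same score with FFT-based correlation or integral images.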

  13. Jet-images: computer vision inspired techniques for jet tagging

    International Nuclear Information System (INIS)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel

    2015-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
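
    The Fisher discriminant step described in this record can be sketched generically as follows; this is the standard two-class formulation applied to flattened images, not the authors' code, and the small regularization term is an assumption added for numerical stability.

```python
import numpy as np

def fisher_direction(X_sig, X_bkg, eps=1e-6):
    """Two-class Fisher linear discriminant: w is proportional to
    S_W^-1 (mean_sig - mean_bkg), where S_W is the within-class scatter.
    Rows of X_sig / X_bkg are flattened jet-images (signal / background)."""
    m_sig, m_bkg = X_sig.mean(axis=0), X_bkg.mean(axis=0)
    S_w = np.cov(X_sig, rowvar=False) + np.cov(X_bkg, rowvar=False)
    w = np.linalg.solve(S_w + eps * np.eye(S_w.shape[0]), m_sig - m_bkg)
    return w / np.linalg.norm(w)
```

    Projecting a new jet-image onto w yields a single discriminant value; thresholding it separates W-boson-like images from quark- and gluon-initiated backgrounds.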

  14. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  15. Computer Vision Aided Measurement of Morphological Features in Medical Optics

    Directory of Open Access Journals (Sweden)

    Bogdana Bologa

    2010-09-01

    Full Text Available This paper presents a computer vision aided method for non-invasive interpupillary distance (IPD) measurement. IPD is a morphological feature required in any ophthalmological frame prescription. A good frame prescription depends heavily on accurate IPD estimation in order for the lenses to be free of eye strain. The idea is to replace the ruler or the pupilometer with a more accurate method while keeping the patient's eyes free from any movement or gaze restrictions. The method proposed in this paper uses a video camera and a point light source in order to determine the IPD with sub-millimeter error. The results are compared against standard eye and object detection routines from the literature.

  16. Computer vision techniques for the diagnosis of skin cancer

    CERN Document Server

    Celebi, M

    2014-01-01

    The goal of this volume is to summarize the state-of-the-art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy, have recently come to the forefront in providing greater diagnostic accuracy. These imaging technologies presented in this book can serve as an adjunct to physicians and  provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for pa...

  17. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences between the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats, updating the internal model of the object's location, orientation and velocity continuously.
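
    The core correction step of this record, mapping the 2-D image-plane prediction error back into the 3-D internal model through a generalized inverse Jacobian, can be sketched as below. The state layout and the gain parameter are illustrative assumptions, not details from the paper.

```python
import numpy as np

def correct_model(state, J, predicted_2d, observed_2d, gain=1.0):
    """Update the internal 3-D model from the 2-D feature error.
    J is the Jacobian of image-plane feature locations with respect to
    the state; its Moore-Penrose pseudoinverse maps the error signal
    back into state space."""
    error_2d = observed_2d - predicted_2d
    return state + gain * (np.linalg.pinv(J) @ error_2d)
```

    Running this correction once per frame gives the continuous model update the abstract describes; a gain below 1 trades responsiveness for noise robustness.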

  18. Computer vision techniques for rotorcraft low-altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

    A description is given of research that applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  19. Identification of cichlid fishes from Lake Malawi using computer vision.

    Science.gov (United States)

    Joo, Deokjin; Kwan, Ye-seul; Song, Jongwoo; Pinho, Catarina; Hey, Jody; Won, Yong-Jin

    2013-01-01

    The explosively radiating evolution of cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at as many as 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in the shape of jaw and body. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automating species identification of cichlids. Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated by the statistical classifiers Support Vector Machine and Random Forests. Both classifiers performed better when body shape information was added to the color and stripe features: beyond coloration and stripe pattern, body shape variables boosted classification accuracy by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, in contrast to a mere 42% success rate by human eyes. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. Computer vision showed notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids, although the information was not enough for errorless species identification. Our results indicate that there appears to be an unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species.

  20. Computer and visual display terminals (VDT) vision syndrome (CVDTS).

    Science.gov (United States)

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-07-01

    Computer and visual display terminals have become an essential part of the modern lifestyle. The use of these devices has made our lives simpler in household work as well as in offices. However, the prolonged use of these devices is not without complications. Computer and visual display terminals syndrome is a constellation of ocular as well as extraocular symptoms associated with prolonged use of visual display terminals. This syndrome is gaining importance in the modern era because of the widespread use of technologies in day-to-day life. It is associated with asthenopic symptoms, visual blurring, dry eyes, musculoskeletal symptoms such as neck pain, back pain, shoulder pain, carpal tunnel syndrome, psychosocial factors, venous thromboembolism, shoulder tendonitis, and elbow epicondylitis. Proper identification of symptoms and causative factors is necessary for accurate diagnosis and management. This article focuses on the various aspects of computer and visual display terminals syndrome described in the previous literature. Further research is needed for a better understanding of its complex pathophysiology and management.

  1. Computer vision tools to optimize reconstruction parameters in x-ray in-line phase tomography

    International Nuclear Information System (INIS)

    Rositi, H; Frindel, C; Wiart, M; Langer, M; Olivier, C; Peyrin, F; Rousseau, D

    2014-01-01

    In this article, a set of three computer vision tools, including scale invariant feature transform (SIFT), a measure of focus, and a measure based on tractography are demonstrated to be useful in replacing the eye of the expert in the optimization of the reconstruction parameters in x-ray in-line phase tomography. We demonstrate how these computer vision tools can be used to inject priors on the shape and scale of the object to be reconstructed. This is illustrated with the Paganin single intensity image phase retrieval algorithm in heterogeneous soft tissues of biomedical interest, where the selection of the reconstruction parameters was previously made from visual inspection or physical assumptions on the composition of the sample. (paper)
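
    One of the three tools named in this record, the measure of focus, can be approximated with a generic variance-of-Laplacian score and used to rank candidate reconstruction parameters. This is a common focus metric offered as a sketch; it is not necessarily the exact measure used in the paper, and the parameter-sweep wrapper is our assumption.

```python
import numpy as np

def focus_measure(img):
    """Variance of a 3x3 Laplacian response over the interior pixels:
    sharper (better focused) reconstructions give larger values."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def pick_parameter(reconstruct, candidates):
    """Run the reconstruction for each candidate parameter and return
    the one whose result scores highest on the focus measure."""
    return max(candidates, key=lambda p: focus_measure(reconstruct(p)))
```

    In the paper's setting, `reconstruct` would be the Paganin single-image phase retrieval run at a given delta/beta ratio; the focus score then replaces visual inspection when choosing that ratio.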

  2. Computer vision tools to optimize reconstruction parameters in x-ray in-line phase tomography

    Science.gov (United States)

    Rositi, H.; Frindel, C.; Wiart, M.; Langer, M.; Olivier, C.; Peyrin, F.; Rousseau, D.

    2014-12-01

    In this article, a set of three computer vision tools, including scale invariant feature transform (SIFT), a measure of focus, and a measure based on tractography are demonstrated to be useful in replacing the eye of the expert in the optimization of the reconstruction parameters in x-ray in-line phase tomography. We demonstrate how these computer vision tools can be used to inject priors on the shape and scale of the object to be reconstructed. This is illustrated with the Paganin single intensity image phase retrieval algorithm in heterogeneous soft tissues of biomedical interest, where the selection of the reconstruction parameters was previously made from visual inspection or physical assumptions on the composition of the sample.

  3. Recent advances in transient imaging: A computer graphics and vision perspective

    Directory of Open Access Journals (Sweden)

    Adrian Jarabo

    2017-03-01

    Full Text Available Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly-scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at the pico or nanosecond resolution, information usually lost during the capture-time temporal integration. This paper presents recent advances in this field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications and simulation. Keywords: Transient imaging, Ultrafast imaging, Time-of-flight

  4. The computer vision in the service of safety and reliability in steam generators inspection services

    International Nuclear Information System (INIS)

    Pineiro Fernandez, P.; Garcia Bueno, A.; Cabrera Jordan, E.

    2012-01-01

    Computer vision has matured very quickly over the last ten years, facilitating new developments in various areas of nuclear application and allowing processes and tasks to be automated and simplified, whether performed instead of or in collaboration with people and equipment. Current computer vision (a more appropriate term than artificial vision) also offers great possibilities for improving the reliability and safety of NPP inspection systems.

  5. Blink rate, incomplete blinks and computer vision syndrome.

    Science.gov (United States)

    Portello, Joan K; Rosenfield, Mark; Chu, Christina A

    2013-05-01

    Computer vision syndrome (CVS), a highly prevalent condition, is frequently associated with dry eye disorders. Furthermore, a reduced blink rate has been observed during computer use. The present study examined whether post-task ocular and visual symptoms are associated with either a decreased blink rate or a higher prevalence of incomplete blinks. An additional trial tested whether increasing the blink rate would reduce CVS symptoms. Subjects (N = 21) were required to perform a continuous 15-minute reading task on a desktop computer at a viewing distance of 50 cm. Subjects were videotaped during the task to determine their blink rate and amplitude. Immediately after the task, subjects completed a questionnaire regarding ocular symptoms experienced during the trial. In a second session, the blink rate was increased by means of an audible tone that sounded every 4 seconds, with subjects being instructed to blink on hearing the tone. The mean blink rate during the task without the audible tone was 11.6 blinks per minute (SD, 7.84). The percentage of blinks deemed incomplete for each subject ranged from 0.9 to 56.5%, with a mean of 16.1% (SD, 15.7). A significant positive correlation was observed between the total symptom score and the percentage of incomplete blinks during the task (p = 0.002). Furthermore, a significant negative correlation was noted between the blink score and symptoms (p = 0.035). Increasing the mean blink rate to 23.5 blinks per minute by means of the audible tone did not produce a significant change in the symptom score. Whereas CVS symptoms are associated with a reduced blink rate, the completeness of the blink may be equally significant. Because instructing a patient to increase his or her blink rate may be ineffective or impractical, actions to achieve complete corneal coverage during blinking may be more helpful in alleviating symptoms during computer operation.
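
    The correlations this record reports (e.g. symptom score versus percentage of incomplete blinks) come down to a Pearson coefficient over paired per-subject measures. A minimal implementation, offered only to make the analysis concrete (the study's own statistics package is not specified):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired measurement
    sequences, e.g. per-subject symptom scores and incomplete-blink
    percentages."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```

    A value near +1 corresponds to the positive symptom/incomplete-blink association reported, and a negative value to the inverse blink-rate/symptom association.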

  6. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    Science.gov (United States)

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…

  7. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    There is an increasing interest in using 3D computer vision in precision agriculture. This calls for better quantitative evaluation and understanding of computer vision methods. This paper proposes a test framework using ray traced crop scenes that allows in-depth analysis of algorithm performance...

  8. The face of an imposter : Computer Vision for Deception Detection Research in Progress

    NARCIS (Netherlands)

    Elkins, Aaron C.; Sun, Yijia; Zafeiriou, Stefanos; Pantic, Maja; Jensen, Matthew; Meservy, Thomas; Burgoon, Judee; Nunamaker, Jay

    2013-01-01

    Using video analyzed from a novel deception experiment, this paper introduces computer vision research in progress that addresses two critical components to computational modeling of deceptive behavior: 1) individual nonverbal behavior differences, and 2) deceptive ground truth. Video interviews

  9. Computer vision inspection of rice seed quality with discriminant analysis

    Science.gov (United States)

    Cheng, Fang; Ying, Yibin

    2004-10-01

    This study was undertaken to develop computer vision-based rice seed inspection technology for quality control. Color image classification using a discriminant analysis algorithm to identify germinated rice seeds was successfully implemented. The hybrid rice seed cultivars involved were Jinyou402, Shanyou10, Zhongyou207 and Jiayou99. Sixteen morphological features and six color features were extracted from sample images belonging to the training sets. The color feature 'Huebmean' shows the strongest classification ability among all the features. Computed as the area of the seed region divided by the area of the smallest convex polygon that can contain the seed region, the feature 'Solidity' outperforms the other morphological features in recognizing germinated seeds. Combining the two features 'Huebmean' and 'Solidity', discriminant analysis was used to classify normal rice seeds and seeds germinated on the panicle. Results show that the algorithm achieved an overall average accuracy of 98.4% for both normal seeds and germinated seeds in all cultivars. The combination of 'Huebmean' and 'Solidity' proved to be a good indicator for germinated seeds. The simple discriminant algorithm using just two features shows high accuracy and good adaptability.
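
    The strongest color feature in this record, 'Huebmean', reads as the mean hue over the segmented seed region; that reading is our assumption, but under it the feature is a one-liner with the standard library's colorsys:

```python
import colorsys

def hue_mean(rgb_pixels):
    """Mean HSV hue (on a 0-1 scale) over the pixels of a segmented
    seed region. Input: iterable of (r, g, b) tuples in [0, 1]."""
    hues = [colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in rgb_pixels]
    return sum(hues) / len(hues)
```

    Paired with the 'Solidity' shape feature (seed area over convex-hull area), this scalar is one of the two inputs to the study's discriminant classifier. Note that hue is circular, so a production version should average hues as angles rather than raw values.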

  10. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discussed the image preprocessing, object detection and pose estimation algorithms under the poor light conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, was described in detail. • The technical issues encountered during the research were discussed. - Abstract: Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means for in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically using computer vision. The use of computer vision for identification and location faces difficult challenges such as shadows, poor contrast, low illumination levels and sparse texture. The method developed in this paper enables credible identification of objects with shadows through invariant image and edge detection. The proposed algorithms are validated on our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects of different shapes and sizes can be picked up successfully.
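
    The middle of the pipeline named in this record, contour filtering followed by object location, reduces to rejecting small candidate regions and computing the centroid of each survivor. The sketch below is a deliberately simplified, OpenCV-free reading of those two stages; the area threshold and data layout are illustrative, not values from the paper.

```python
def locate_objects(regions, min_area=50):
    """Contour filter + object location, simplified: drop candidate
    regions below an area threshold (noise blobs), then return each
    survivor's centroid in image coordinates.
    `regions` is a list of pixel-coordinate lists, one per contour."""
    located = []
    for pixels in regions:
        if len(pixels) < min_area:   # contour filter: reject small blobs
            continue
        cx = sum(x for x, _ in pixels) / len(pixels)
        cy = sum(y for _, y in pixels) / len(pixels)
        located.append((cx, cy))
    return located
```

    In the full system these centroids, together with the minimum enclosing rectangle (MER) of each region, feed the pose estimation stage that yields the 3D grasp pose.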

  11. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    International Nuclear Information System (INIS)

    Lin, Linglong; Song, Yuntao; Yang, Yang; Feng, Hansheng; Cheng, Yong; Pan, Hongtao

    2015-01-01

    Highlights: • We discussed the image preprocessing, object detection and pose estimation algorithms under the poor light conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, was described in detail. • The technical issues encountered during the research were discussed. - Abstract: Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means for in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically using computer vision. The use of computer vision for identification and location faces difficult challenges such as shadows, poor contrast, low illumination levels and sparse texture. The method developed in this paper enables credible identification of objects with shadows through invariant image and edge detection. The proposed algorithms are validated on our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects of different shapes and sizes can be picked up successfully.

  12. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    Science.gov (United States)

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  13. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    Science.gov (United States)

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purpose to the object tracking tasks using only limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  14. Computer Vision Malaria Diagnostic Systems—Progress and Prospects

    Directory of Open Access Journals (Sweden)

    Joseph Joel Pollak

    2017-08-01

Full Text Available Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, several commercial platforms have emerged recently. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features that are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings worldwide.

  15. Traffic light detection and intersection crossing using mobile computer vision

    Science.gov (United States)

    Grewei, Lynne; Lagali, Christopher

    2017-05-01

The solution for intersection detection and crossing to support the development of blindBike, an assisted biking system for the visually impaired, is discussed. Traffic light detection and intersection crossing are key needs in the task of biking. These problems are tackled through mobile computer vision, in the form of a mobile application on an Android phone. This research builds on previous traffic light detection algorithms with a focus on efficiency and compatibility on a resource-limited platform. Light detection is achieved through blob detection algorithms that use training data to detect patterns of red, green, and yellow in complex real-world scenarios where multiple lights may be present. Issues of obscurity and scale are also addressed. Safe intersection crossing in blindBike is also discussed; this module takes a conservative "assistive" technology approach. To achieve this, blindBike uses not only the Android device but also an external Bluetooth/ANT-enabled bike cadence sensor. Real-world testing results are given and future work is discussed.

  16. Prediction of pork color attributes using computer vision system.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng Hung; Bachmeier, Laura; Somers, Rose Marie; Chen, Kun Jie; Newman, David

    2016-03-01

Color image processing and regression methods were utilized to evaluate the color score of pork center cut loin samples. One hundred loin samples of subjective color scores 1 to 5 (NPB, 2011; n=20 for each color score) were selected to determine correlation values between Minolta colorimeter measurements and image processing features. Eighteen image color features were extracted from three different color spaces: RGB (red, green, blue), HSI (hue, saturation, intensity), and L*a*b*. When comparing Minolta colorimeter values with those obtained from image processing, correlations were significant (P<0.0001) for L* (0.91), a* (0.80), and b* (0.66). Two comparable regression models (linear and stepwise) were used to evaluate prediction results of pork color attributes. The proposed linear regression model had a coefficient of determination (R(2)) of 0.83, compared to the stepwise regression result (R(2)=0.70). These results indicate that computer vision methods have potential to be used as a tool in predicting pork color attributes. Copyright © 2015 Elsevier Ltd. All rights reserved.
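The study's central step, correlating colorimeter readings with image-derived color features and fitting a linear model, can be sketched in a few lines. The paired values below are illustrative made-up numbers, not data from the paper:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D sequences."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

def fit_linear(x, y):
    """Least-squares line y ~ a*x + b; returns (a, b)."""
    a, b = np.polyfit(x, y, 1)
    return float(a), float(b)

# Hypothetical paired measurements: colorimeter L* vs. image-derived L*
colorimeter_L = np.array([42.1, 45.3, 48.0, 50.2, 53.7, 55.9, 58.4, 61.0])
image_L       = np.array([40.8, 44.9, 47.1, 50.9, 52.6, 56.3, 57.8, 62.1])

r = pearson_r(colorimeter_L, image_L)
slope, intercept = fit_linear(image_L, colorimeter_L)
```

With real measurements, `r` would play the role of the reported correlations (e.g. 0.91 for L*) and the fitted line that of the proposed linear regression model.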

  17. Identification of double-yolked duck egg using computer vision.

    Directory of Open Access Journals (Sweden)

    Long Ma

Full Text Available The double-yolked (DY) egg is quite popular in some Asian countries because it is considered a sign of good luck; however, the double yolk is one of the reasons why these eggs fail to hatch. The use of automatic methods for identifying DY eggs can increase efficiency in the poultry industry by decreasing egg loss during incubation or improving sale proceeds. In this study, two methods for DY duck egg identification were developed using computer vision technology. Transmittance images of DY and single-yolked (SY) duck eggs were acquired by a CCD camera to identify them according to their shape features. A Fisher's linear discriminant (FLD) model equipped with a set of normalized Fourier descriptors (NFDs) extracted from the acquired images and a convolutional neural network (CNN) model using primary preprocessed images were built to recognize duck egg yolk types. The classification accuracies of the FLD model for SY and DY eggs were 100% and 93.2%, respectively, while the classification accuracies of the CNN model for SY and DY eggs were 98% and 98.8%, respectively. The CNN-based algorithm took about 0.12 s to recognize one sample image, which was slightly faster than the FLD-based one (about 0.20 s). Finally, this work compared the two classification methods and identified the better method for DY egg identification.
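A minimal sketch of the normalized Fourier descriptors (NFDs) that feed the FLD model: the contour is treated as a complex signal, the DC term is dropped for translation invariance, magnitudes are taken for rotation and start-point invariance, and coefficients are divided by the first harmonic for scale invariance. The square contour here is a toy stand-in for an egg silhouette, not the paper's exact formulation:

```python
import numpy as np

def normalized_fourier_descriptors(points, n_desc=6):
    """Translation-, scale-, and rotation-invariant shape descriptors
    from a closed contour given as an (N, 2) array of boundary points."""
    z = points[:, 0] + 1j * points[:, 1]   # contour as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                               # drop DC term -> translation invariance
    mags = np.abs(F)                       # magnitudes -> rotation invariance
    return mags[1:1 + n_desc] / mags[1]    # first-harmonic ratio -> scale invariance

square = np.array([(0, 0), (1, 0), (2, 0), (2, 1),
                   (2, 2), (1, 2), (0, 2), (0, 1)], float)
bigger = 3.0 * square + np.array([5.0, 7.0])  # scaled and shifted copy

d1 = normalized_fourier_descriptors(square)
d2 = normalized_fourier_descriptors(bigger)
```

Because scaling and translation only rescale the spectrum and change its DC term, the descriptors of the two contours coincide.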

  18. 24 CFR 220.822 - Claim computation; items included.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Claim computation; items included. 220.822 Section 220.822 Housing and Urban Development Regulations Relating to Housing and Urban... computation; items included. (a) Assignment of loan. Upon an acceptable assignment of the note and security...

  19. Deep Hierarchies in the Primate Visual Cortex: What Can We Learn for Computer Vision?

    OpenAIRE

    Kruger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodriguez-Sanchez, Antonio J.; Wiskott, Laurenz

    2013-01-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition or vision-based navigation and manipulation. This article reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer ...

  20. Hand gesture recognition system based in computer vision and machine learning

    OpenAIRE

    Trigueiros, Paulo; Ribeiro, António Fernando; Reis, L. P.

    2015-01-01

    "Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19" Hand gesture recognition is a natural way of human computer interaction and an area of very active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research applied to Hum...

  1. Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision

    OpenAIRE

    Warren, William H.

    2012-01-01

    David Marr’s (1982) book Vision attempted to formulate a thoroughgoing formal theory of perception. Marr borrowed much of the “computational” level from James Gibson: a proper understanding of the goal of vision, the natural constraints, and the available information is prerequisite to describing the processes and mechanisms by which the goal is achieved. Yet as a research program leading to a computational model of human vision, Marr’s program did not succeed. This article asks why, using th...

  2. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

Section I, Introduction to Digital Image Processing and Analysis: Digital Image Processing and Analysis (Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading); Computer Imaging Systems (Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading). Section II, Digital Image Analysis and Computer Vision: Introduction to Digital Image Analysis (Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read

  3. Learning openCV computer vision with the openCV library

    CERN Document Server

    Bradski, Gary

    2008-01-01

Learning OpenCV puts you right in the middle of the rapidly expanding field of computer vision. Written by the creators of OpenCV, the widely used free open-source library, this book introduces you to computer vision and demonstrates how you can quickly build applications that enable computers to "see" and make decisions based on the data. With this book, any developer or hobbyist can get up and running with the framework quickly, whether it's to build simple or sophisticated vision applications.

  4. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950

  5. Dynamic programming and graph algorithms in computer vision.

    Science.gov (United States)

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
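For the stereo problem discussed above, the classic dynamic-programming formulation runs a Viterbi-style pass over disparities along each scanline. The sketch below is a generic illustration of that idea, not the authors' implementation; the smoothness weight and the synthetic scanline are arbitrary assumptions:

```python
import numpy as np

def scanline_stereo(left, right, max_disp=4, lam=0.5):
    """Dynamic programming along one scanline: choose a disparity d[x]
    per pixel minimizing |left[x] - right[x - d]| plus a smoothness
    penalty lam * |d[x] - d[x-1]|."""
    n, D = len(left), max_disp + 1
    BIG = 1e9
    C = np.full((n, D), BIG)              # unary matching costs
    for x in range(n):
        for d in range(min(x, max_disp) + 1):
            C[x, d] = abs(left[x] - right[x - d])
    cost = C[0].copy()                    # forward (Viterbi) pass
    back = np.zeros((n, D), dtype=int)
    for x in range(1, n):
        new_cost = np.empty(D)
        for d in range(D):
            trans = cost + lam * np.abs(np.arange(D) - d)
            best = int(np.argmin(trans))
            back[x, d] = best
            new_cost[d] = C[x, d] + trans[best]
        cost = new_cost
    disp = np.zeros(n, dtype=int)         # backtrack the optimal path
    disp[-1] = int(np.argmin(cost))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp

# synthetic scanlines: the left view is the right view shifted by 2 pixels
right = np.array([0, 5, 1, 7, 2, 9, 3, 11, 4, 13, 6, 15, 8, 17, 10], float)
left = np.empty_like(right)
left[2:] = right[:-2]
left[:2] = right[:2]                      # arbitrary boundary fill

disp = scanline_stereo(left, right)
```

On this toy input the recovered disparity is 2 everywhere the shift is defined, showing how the smoothness term and matching cost interact.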

  6. Computing for magnetic fusion energy research: An updated vision

    International Nuclear Information System (INIS)

    Henline, P.; Giarrusso, J.; Davis, S.; Casper, T.

    1993-01-01

This Fusion Computing Council perspective is written to present the primary concerns of the fusion computing community at the time of publication of the report, not necessarily as a summary of the information contained in the individual sections. These concerns reflect FCC discussions during final review of contributions from the various working groups and portray our latest information. This report itself should be considered dynamic, requiring periodic updating in an attempt to track the rapid evolution of the computer industry relevant to the requirements of magnetic fusion research. The most significant common concern among the Fusion Computing Council working groups is networking capability. All groups see an increasing need for network services due to the use of workstations, distributed computing environments, increased use of graphics services, X-window usage, remote experimental collaborations, remote data access for specific projects, and other collaborations. Other areas of concern include support for workstations, enhanced infrastructure to support collaborations, the User Service Centers, NERSC and future massively parallel computers, and FCC-sponsored workshops.

  7. Computer vision for robots; Proceedings of the Meeting, Cannes, France, December 2-6, 1985

    Science.gov (United States)

    Faugeras, O. D. (Editor); Kelley, R. B. (Editor)

    1986-01-01

    The conference presents papers on segmentation techniques, three-dimensional recognition and representation, processing image sequences, and navigation and mobility. Particular attention is given to determining the pose of an object, adaptive least squares correlation with geometrical constraints, and the reliable formation of feature vectors for two-dimensional shape representation. Other topics include the real-time tracking of a target moving on a natural textured background, computer vision for the guidance of roving robots, and integrating sensory data for object recognition tasks.

  8. Furnace grate monitoring by computer vision; Rosteroevervakning med bildanalys

    Energy Technology Data Exchange (ETDEWEB)

    Blom, Elisabet; Gustafsson, Bengt; Olsson, Magnus

    2005-01-01

During the last couple of years, computer vision has developed considerably, alongside computers and video technology. This makes it technically and economically feasible to use cameras as monitoring instruments. The first experiments with this type of equipment were made in the early 1990s. Most of those experiments measured the bed length from the back of the grate. In this experiment the cameras were mounted at the front instead. The highest priority was to detect the topography of the fuel bed: an uneven fuel bed means combustion with local temperature variations, which makes the combustion more difficult to control. The goal was to demonstrate the possibility of measuring fuel bed height, particle size, and combustion intensity or combustion spreading from pictures taken by one or two cameras. The test was done in a bark-fuelled boiler in Karlsborg because that boiler has doors on the fuel feeding side suitable for looking down on the grate. The results show that the camera mounting used in Karlsborg was not good enough for a 3D calculation of the fuel bed. It was, however, possible to see the drying, and the flames were visible in the pictures. To see the flames and steam without over-exposure caused by different light levels at different points, a filter or a camera with non-linear sensitivity can be used. To test whether a parallel mounting of the two cameras would work, a cold test was done in the grate test facility at KMW in Norrtaelje. With the pictures from this test we were able to make 3D measurements of the bed topography. The conclusions are that it is possible to measure bed height and bed topography with camera positions other than those we were able to use in this experiment. Particle size is easier to measure before the fuel enters the boiler, for example over a rim where the particles fall down. It is also possible to estimate a temperature zone where the steam is released.

  9. Crossing the divide between computer vision and data bases in search of image data bases

    NARCIS (Netherlands)

    Worring, M.; Smeulders, A.W.M.; Ioannidis, Y.; Klas, W.

    1998-01-01

Image databases call upon the combined effort of computer vision and database technology to advance beyond exemplary systems. In this paper we chart several areas for mutually beneficial research activities and provide an architectural design to accommodate them.

  10. Biophysics of the Eye in Computer Vision: Methods and Advanced Technologies

    Science.gov (United States)

    Hammoud, Riad I.; Hansen, Dan Witzner

The eyes have it! This chapter describes cutting-edge computer vision methods employed in advanced vision sensing technologies for medical, safety, and security applications, where the human eye represents the object of interest for both the imager and the computer. A camera receives light from the real eye to form a sequence of digital images of it. As the eye scans the environment, or focuses on particular objects in the scene, the computer simultaneously localizes the eye position, tracks its movement over time, and infers measures such as the attention level and the gaze direction in real time and fully automatically. The main focus of this chapter is on computer vision and pattern recognition algorithms for eye appearance variability modeling, automatic eye detection, and robust eye position tracking. This chapter offers good readings and solid methodologies for building the two fundamental low-level building blocks of a vision-based eye tracking technology.

  11. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Science.gov (United States)

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P

    2014-01-01

Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex, and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open source software. Source code for data analysis is written using the R package. The equations to calculate image descriptors have also been provided.
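The claim that almost all shape variation is captured by 5 principal components can be checked with a standard PCA via the singular value decomposition. This sketch uses a synthetic feature matrix with two underlying degrees of freedom as a stand-in for the real rosette descriptors:

```python
import numpy as np

def pca_explained_variance(X, k=5):
    """Fraction of total variance captured by the top-k principal
    components of a samples-by-features matrix X (via SVD)."""
    Xc = X - X.mean(axis=0)                    # center each feature
    s = np.linalg.svd(Xc, compute_uv=False)    # singular values
    var = s**2
    return float(var[:k].sum() / var.sum())

# synthetic "shape feature" matrix with ~2 underlying degrees of freedom
rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 2))              # 60 plants, 2 latent factors
mixing = rng.normal(size=(2, 10))              # mapped into 10 observed features
X = latent @ mixing + 0.01 * rng.normal(size=(60, 10))

frac = pca_explained_variance(X, k=5)
```

Because the data are effectively rank 2 plus small noise, the top 5 components explain nearly all variance, mirroring the kind of result reported in the paper.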

  12. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Directory of Open Access Journals (Sweden)

    Anyela Camargo

Full Text Available Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex, and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open source software. Source code for data analysis is written using the R package. The equations to calculate image descriptors have also been provided.

  13. Survey of computer vision technology for UAV navigation

    Science.gov (United States)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

Navigation based on computer vision technology, which offers strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes and underwater robots, which further stimulates research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAV and the start of later phases of the lunar exploration program, there has been significant progress in the study of visual navigation. The paper expounds the development of computer-vision-based navigation in the field of UAV navigation research and concludes that visual navigation is mainly applied to three aspects, as follows. (1) Acquisition of UAV navigation parameters: parameters including UAV attitude, position and velocity information can be obtained from the relationship between sensor images and the carrier's attitude, between instant matching images and reference images, and between the carrier's velocity and characteristics of sequential images. (2) Autonomous obstacle avoidance: there are many ways to achieve obstacle avoidance in UAV navigation; methods based on computer vision technology, including feature matching, template matching and image frames, are mainly introduced. (3) Target tracking and positioning: using the obtained images, UAV position is calculated with optical flow methods, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also expounds three kinds of mainstream visual systems. (1) High-speed visual systems use a parallel structure, with which image detection and processing are

  14. Computer Vision Utilization for Detection of Green House Tomato under Natural Illumination

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2013-02-01

Full Text Available The agricultural sector has experienced the application of automated systems for two decades. These systems are applied to harvest fruits in agriculture. Computer vision is one of the technologies most widely used in the food industry and agriculture. In this paper, an automated system based on computer vision for harvesting greenhouse tomatoes is presented. A CCD camera takes images of the workspace, and tomatoes with over 50 percent ripeness are detected through an image processing algorithm. In this research, three color spaces (RGB, HSI and YCbCr) and three algorithms (threshold recognition, image curvature and red/green ratio) were used to identify ripe tomatoes against the background under natural illumination. The average errors of the threshold recognition, red/green ratio and image curvature algorithms were 11.82%, 10.03% and 7.95% in the HSI, RGB and YCbCr color spaces, respectively. Therefore, the YCbCr color space and the image curvature algorithm were identified as the most suitable for recognizing fruits under natural illumination conditions.
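Of the three algorithms compared, the red/green-ratio test is the simplest to illustrate. The sketch below applies it in RGB on a toy image; the threshold value and the pixel colors are arbitrary assumptions, not values reported in the paper:

```python
import numpy as np

def ripe_mask(rgb, ratio_thresh=1.5):
    """Boolean mask of pixels whose red/green ratio exceeds a threshold,
    a simple stand-in for a red/green-ratio ripeness test."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float) + 1e-6   # avoid division by zero
    return (r / g) > ratio_thresh

# toy 4x4 image: a ripe (red) patch on a green background
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[...] = (40, 160, 40)          # green background pixels
img[1:3, 1:3] = (200, 60, 40)     # red "tomato" pixels

mask = ripe_mask(img)
```

In a real system the same idea would be applied per pixel to camera frames, followed by morphological cleanup and blob analysis to locate the fruit.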

  15. Computational Biology and the Limits of Shared Vision

    DEFF Research Database (Denmark)

    Carusi, Annamaria

    2011-01-01

of cases is necessary in order to gain a better perspective on the social sharing of practices, and on what other factors this sharing depends. The article presents the case of currently emerging inter-disciplinary visual practices in the domain of computational biology, where the sharing of visual... practices would be beneficial to the collaborations necessary for the research. Computational biology includes sub-domains where visual practices are coming to be shared across disciplines, and those where this is not occurring, and where the practices of others are resisted. A significant point..., its domain of study. Social practices alone are not sufficient to account for the shaping of evidence. The philosophy of Merleau-Ponty is introduced as providing an alternative framework for thinking of the complex inter-relations between all of these factors. This philosophy enables us...

  16. Automatic calibration system of the temperature instrument display based on computer vision measuring

    Science.gov (United States)

    Li, Zhihong; Li, Jinze; Bao, Changchun; Hou, Guifeng; Liu, Chunxia; Cheng, Fang; Xiao, Nianxin

    2010-07-01

With the development of computers, image processing and computer-aided optical measurement, various measuring techniques based on optical image processing are gradually maturing and entering practical use. On this basis, we draw on many years of experience and on practical needs in temperature measurement and computer vision measurement to propose a fully automatic calibration method for temperature instrument displays, integrating computer vision measuring techniques. It realizes synchronized collection of the reference temperature value and improves calibration efficiency. Based on the least-squares fitting principle, and integrating data processing with optimization theory, it rapidly and accurately realizes automatic acquisition and calibration of temperature.
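The least-squares fitting step can be illustrated directly: fit a line mapping the display reading recognized by the vision system to the reference temperature. The calibration pairs below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical calibration pairs: the value read off the instrument
# display by the vision system vs. the reference temperature.
display_reading = np.array([10.2, 20.1, 29.8, 40.3, 50.0, 59.9])
reference_temp  = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])

# Least-squares line: reference ~ a * reading + b, as in the paper's
# least-squares fitting step (coefficients here are illustrative only).
a, b = np.polyfit(display_reading, reference_temp, 1)

# Residual of the fitted calibration line on the calibration points.
max_error = float(np.max(np.abs(a * display_reading + b - reference_temp)))
```

A deployed system would evaluate `a * reading + b` on new frames and flag instruments whose residuals exceed a tolerance.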

  17. Computer Use and Vision-Related Problems Among University ...

    African Journals Online (AJOL)

Conclusion: A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with occurrence of the symptoms and increased interruptions of the students' work. There is a need to increase the ergonomic awareness ...

  18. Signal- and Symbol-based Representations in Computer Vision

    DEFF Research Database (Denmark)

    Krüger, Norbert; Felsberg, Michael

We discuss problems of signal- and symbol-based representations in terms of three dilemmas which are faced in the design of each vision system. Signal- and symbol-based representations are opposite ends of a spectrum of conceivable design decisions, caught at opposite sides of the dilemmas. We make...

  19. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

Full Text Available The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature of activity suggesting potential antipsychotic activity. This finding is consistent with TP-10's activity in multiple rodent models, which is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to those of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  20. Computer Vision Syndrome and Associated Factors Among Medical ...

    African Journals Online (AJOL)

Introduction. Globally, personal computers are one of the commonest office tools. Almost all institutions, colleges, universities and homes today use computers regularly. Using computers has become a 21st-century necessity.[1] However, their usage, even for 3 h/day, leads to a health risk of developing computer ...

  1. [Vision test program for ophthalmologists on Apple II, IIe and IIc computers].

    Science.gov (United States)

    Huber, C

    1985-03-01

A microcomputer program for the Apple II family of computers, on a monochrome and a color screen, is described. The program draws most of the tests used by ophthalmologists and is offered as an alternative to a projector system. One advantage of the electronic generation of drawings is that true random orientation of Pflueger's E is possible. Tests are included for visual acuity (Pflueger's E, Landolt rings, numbers and children's drawings). Colored tests include a duochrome test, simple color vision tests, a fixation help with a musical background, a cobalt blue test and a Worth figure. In the astigmatic dial a mobile pointer helps to determine the axis. New tests can be programmed by the user and exchanged on disks among colleagues.

  2. Computer vision-based sorting of Atlantic salmon (Salmo salar) fillets according to their color level.

    Science.gov (United States)

    Misimi, E; Mathiassen, J R; Erikson, U

    2007-01-01

A computer vision method was used to evaluate the color of Atlantic salmon (Salmo salar) fillets. Computer vision-based sorting of fillets according to their color was studied on 2 separate groups of salmon fillets. The images of the fillets were captured using a high-resolution digital camera. Images of the salmon fillets were then segmented into regions of interest and analyzed in the red, green, and blue (RGB) and CIE L*a*b* (lightness, redness, yellowness) color spaces, and classified according to the Roche color card industrial standard. Comparisons between visual evaluations of fillet color, made by a panel of human inspectors according to the Roche SalmoFan lineal standard, and the color scores generated by the computer vision algorithm showed no significant differences between the methods. Overall, computer vision can be used as a powerful tool to sort fillets by color in a fast and nondestructive manner. The low cost of implementing computer vision solutions creates the potential to replace manual labor in fish processing plants with automation.

  3. Qualitative classification of milled rice grains using computer vision and metaheuristic techniques.

    Science.gov (United States)

    Zareiforoush, Hemad; Minaei, Saeid; Alizadeh, Mohammad Reza; Banakar, Ahmad

    2016-01-01

Qualitative grading of milled rice grains was carried out in this study using a machine vision system combined with several metaheuristic classification approaches. Images of four different classes of milled rice, namely low-processed sound grains (LPS), low-processed broken grains (LPB), high-processed sound grains (HPS), and high-processed broken grains (HPB), representing quality grades of the product, were acquired using a computer vision system. Four different metaheuristic classification techniques, artificial neural networks, support vector machines, decision trees, and Bayesian networks, were utilized to classify the milled rice samples. Results of the validation process indicated that an artificial neural network with a 12-5*4 topology had the highest classification accuracy (98.72 %), followed by a support vector machine with the Universal Pearson VII kernel function (98.48 %), a decision tree with the REP algorithm (97.50 %), and a Bayesian network with the Hill Climber search algorithm (96.89 %). Results presented in this paper can be utilized for developing an efficient system for fully automated classification and sorting of milled rice grains.
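As a far simpler baseline than the four metaheuristic classifiers evaluated in the paper, a nearest-centroid rule over extracted grain features shows the shape of the classification step. The feature values and class names below are toy stand-ins, not the study's data:

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean feature vectors: a minimal nearest-centroid
    classifier (much simpler than the ANN/SVM models in the paper)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    """Assign each row of X to the class of its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# toy 2-D "grain features" (e.g. length, whiteness) for two classes
X_train = np.array([[7.0, 0.8], [7.2, 0.9], [3.1, 0.4], [2.9, 0.3]])
y_train = np.array(["sound", "sound", "broken", "broken"])
classes, centroids = fit_centroids(X_train, y_train)

preds = predict(np.array([[7.1, 0.85], [3.0, 0.35]]), classes, centroids)
```

Swapping this baseline for a trained neural network or SVM changes only the `fit`/`predict` pair; the surrounding feature-extraction pipeline stays the same.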

  4. Computer vision for automatic inspection of agricultural produce

    Science.gov (United States)

    Molto, Enrique; Blasco, Jose; Benlloch, Jose V.

    1999-01-01

    Fruit and vegetables undergo different manipulations from the field to the final consumer, basically oriented towards cleaning and sorting the product into homogeneous categories. For this reason, several research projects aimed at fast, adequate produce sorting and quality control are currently under development around the world. Moreover, it is possible to find manual and semi-automatic commercial systems capable of reasonably performing these tasks. However, in many cases their accuracy is incompatible with current European market demands, which are constantly increasing. IVIA, the Valencian Research Institute of Agriculture, located in Spain, has been involved in several European projects related to machine vision for real-time inspection of various agricultural produce. This paper focuses on work related to two products that have different requirements: fruit and olives. In the case of fruit, the Institute has developed a vision system capable of providing an assessment of the external quality of a single fruit to a robot that also receives information from other sensors. The system uses four different views of each fruit and has been tested on peaches, apples, and citrus. Processing time for each image is under 500 ms using a conventional PC. The system provides information about primary and secondary color, blemishes and their extent, and stem presence and position, which allows further automatic orientation of the fruit in the final box using a robotic manipulator. Work carried out on olives was devoted to fast sorting of table olives. A prototype has been developed to demonstrate the feasibility of a machine vision system capable of automatically sorting 2500 kg/h of olives using low-cost conventional hardware.

  5. Analysis of the Indented Cylinder by the use of Computer Vision

    DEFF Research Database (Denmark)

    Buus, Ole Thomsen

    The research summarised in this PhD thesis took advantage of methods from computer vision to experimentally analyse the sorting/separation ability of a specific type of seed sorting device – known as an “indented cylinder”. The indented cylinder basically separates incoming seeds into two sub-groups: (1) “long” seeds and (2) “short” seeds (known as length-separation). The motion of seeds being physically manipulated inside an active indented cylinder was analysed using various computer vision methods. The data from such analyses were used to create an overview of the machine’s ability to separate ... all the complexities related to that as well. The project arrived at a number of results of high scientific and practical value to the area of applied computer vision, seed processing, and agricultural technology in general. The results and methodologies were summarised in one conference paper ...

  6. A computer vision based method for 3D posture estimation of symmetrical lifting.

    Science.gov (United States)

    Mehrizi, Rahil; Peng, Xi; Xu, Xu; Zhang, Shaoting; Metaxas, Dimitris; Li, Kang

    2018-03-01

    Work-related musculoskeletal disorders (WMSD) are commonly observed among workers involved in material handling tasks such as lifting. To improve workplace safety, it is necessary to assess the musculoskeletal and biomechanical risk exposures associated with these tasks. Such assessment has mainly been conducted using surface marker-based methods, which are time consuming and tedious. During the past decade, computer vision based pose estimation techniques have gained increasing interest and may be a viable alternative to surface marker-based human movement analysis. The aim of this study is to develop and validate a computer vision based marker-less motion capture method to assess 3D joint kinematics of lifting tasks. Twelve subjects performing three types of symmetrical lifting tasks were filmed from two views using optical cameras. The joint kinematics were calculated by the proposed computer vision based motion capture method as well as by a surface marker-based motion capture method. The joint kinematics estimated from the computer vision based method were practically comparable to those obtained by the surface marker-based method. The mean and standard deviation of the difference between the joint angles estimated by the computer vision based method and those obtained by the surface marker-based method were 2.31 ± 4.00°. One potential application of the proposed computer vision based marker-less method is to noninvasively assess 3D joint kinematics of industrial tasks such as lifting. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Development of a wireless computer vision instrument to detect biotic stress in wheat.

    Science.gov (United States)

    Casanova, Joaquin J; O'Shaughnessy, Susan A; Evett, Steven R; Rush, Charles M

    2014-09-23

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.

  8. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. Results suggest that there are several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points, and accelerated convergence.
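
    A minimal sketch of the second scheme: a point of R³ is mapped onto the unit quaternion sphere by inverse stereographic projection (here projecting from the antipode of the identity quaternion; the paper's exact convention may differ), and the resulting unit quaternion gives a rotation matrix whose entries, and hence derivatives, are rational in the three parameters.

```python
def quat_from_stereo(u):
    """Inverse stereographic projection R^3 -> unit quaternion (w, x, y, z)."""
    s = sum(c * c for c in u)
    w = (1 - s) / (1 + s)
    x, y, z = (2 * c / (1 + s) for c in u)
    return (w, x, y, z)

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]
```

    Since the projection maps u = 0 to the identity quaternion and every u to an exactly unit-norm quaternion, an iterative minimizer can update u freely in R³ without any renormalization step.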

  9. Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision.

    Science.gov (United States)

    Warren, William H

    2012-01-01

    David Marr's book Vision attempted to formulate a thoroughgoing formal theory of perception. Marr borrowed much of the "computational" level from James Gibson: a proper understanding of the goal of vision, the natural constraints, and the available information are prerequisite to describing the processes and mechanisms by which the goal is achieved. Yet, as a research program leading to a computational model of human vision, Marr's program did not succeed. This article asks why, using the perception of 3D shape as a morality tale. Marr presumed that the goal of vision is to recover a general-purpose Euclidean description of the world, which can be deployed for any task or action. On this formulation, vision is underdetermined by information, which in turn necessitates auxiliary assumptions to solve the problem. But Marr's assumptions did not actually reflect natural constraints, and consequently the solutions were not robust. We now know that humans do not in fact recover Euclidean structure--rather, they reliably perceive qualitative shape (hills, dales, courses, ridges), which is specified by the second-order differential structure of images. By recasting the goals of vision in terms of our perceptual competencies, and doing the hard work of analyzing the information available under ecological constraints, we can reformulate the problem so that perception is determined by information and prior knowledge is unnecessary.

  10. An innovative road marking quality assessment mechanism using computer vision

    Directory of Open Access Journals (Sweden)

    Kuo-Liang Lin

    2016-06-01

    Aesthetic quality acceptance for road marking works has relied on subjective visual examination. Due to a lack of quantitative operation procedures, the acceptance outcome can be biased, resulting in great quality variation. To improve the aesthetic quality acceptance procedure for road marking, we develop an innovative road marking quality assessment mechanism utilizing machine vision technologies. Using edge smoothness as a quantitative aesthetic indicator, the proposed prototype system first receives digital images of the finished road marking surface and has the images processed and analyzed to capture the geometric characteristics of the marking. The geometric characteristics are then evaluated to determine the quality level of the finished work. The system is demonstrated through two real cases to show how it works. Finally, a test comparing the assessment results of the proposed system with expert inspection is conducted to enhance the accountability of the proposed mechanism.

  11. A Novel Solar Tracker Based on Omnidirectional Computer Vision

    Directory of Open Access Journals (Sweden)

    Zakaria El Kadmiri

    2015-01-01

    This paper presents a novel solar tracker system based on omnidirectional vision technology. The analysis of images acquired with a catadioptric camera allows accurate information to be extracted about the sun's position in both elevation and azimuth. The main advantages of this system are its wide tracking field of 360° horizontally and 200° vertically. The system has the ability to track the sun in real time independently of the spatiotemporal coordinates of the site. The extracted information is used to control the two DC motors of the dual-axis mechanism to achieve the optimal orientation of the photovoltaic panels with the aim of increasing power generation. Several experimental studies have been conducted, and the obtained results confirm the power generation efficiency of the proposed solar tracker.
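
    How azimuth and elevation might be read from a catadioptric image can be sketched as below. This is a deliberately simplified illustrative model (bright-spot centroid for azimuth, a linear radial mapping for elevation), not the authors' algorithm; the threshold and the linear mapping are our assumptions.

```python
import math

def sun_angles(image, cx, cy, r_max, threshold=240):
    """Estimate sun azimuth/elevation from a grayscale omnidirectional image.

    image: list of pixel rows; (cx, cy): image centre of the mirror;
    r_max: mirror radius in pixels. Returns (azimuth_deg, elevation_deg)
    or None if no pixel exceeds the brightness threshold.
    """
    xs, ys, n = 0.0, 0.0, 0
    for yy, row in enumerate(image):
        for xx, v in enumerate(row):
            if v >= threshold:          # treat saturated pixels as the sun blob
                xs += xx; ys += yy; n += 1
    if n == 0:
        return None
    ux, uy = xs / n - cx, ys / n - cy    # centroid relative to mirror centre
    azimuth = math.degrees(math.atan2(uy, ux)) % 360
    r = min(math.hypot(ux, uy) / r_max, 1.0)
    elevation = 90.0 * (1.0 - r)         # centre of mirror = zenith (assumed)
    return azimuth, elevation
```

    In a real catadioptric rig the radial-to-elevation mapping follows the mirror geometry rather than this linear rule.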

  12. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    on steeply sloped surfaces. Also, a novel adaptation of a well-known graph-cut based disparity estimation algorithm with trinocular vision was developed and tested. The results were successful and allowed for better disparity estimations on steeply sloped surfaces. After finding the disparity maps each ... dense ground truth of disparities. Therefore, a test framework was developed based on ray tracing. The goal was to analyze existing methods for disparity map generation. The major problem for existing methods was the steepness of the leaves relative to the closeness of overlapping leaves. Both sum-of-squared-difference methods and energy-minimizing methods had this problem. Following the test, a series of disparity estimation techniques were developed and tested in the test framework using a set of ray-traced images and a hand-annotated set of real plants with similar plant shapes. Novel similarity measures were ...

  13. Image-plane processing for improved computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission; it permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the location of edges is determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of square sensor array lattice to decrease sensitivity to edge orientation improve edge information by about 10%.

  14. Development of PHilMech Computer Vision System (CVS) for Quality Analysis of Rice and Corn

    OpenAIRE

    Andres Morales Tuates jr; Aileen R. Ligisan

    2016-01-01

    Manual analysis of rice and corn is done by visually inspecting each grain and classifying it according to its respective category.  This method is subjective and tedious, leading to errors in analysis.  Computer vision could be used to analyze the quality of rice and corn by developing models that correlate shape and color features with the various classifications.  The PHilMech low-cost computer vision system (CVS) was developed to analyze the quality of rice and corn.  It is composed of an ordinary ...

  15. Big data computing: Building a vision for ARS information management

    Science.gov (United States)

    Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...

  16. Human vision combines oriented filters to compute edges.

    Science.gov (United States)

    Georgeson, M A

    1992-09-22

    The experiments examined the perceived spatial structure of plaid patterns, composed of two or three sinusoidal gratings of the same spatial frequency, superimposed at different orientations. Perceived structure corresponded well with the pattern of zero crossings in the output of a circular spatial filter applied to the image. This lends some support to Marr & Hildreth's (Proc. R. Soc. Lond. B 207, 187 (1980)) theory of edge detection as a model for human vision, but with a very different implementation. The perceived structure of two-component plaids was distorted by prior exposure to a masking or adapting grating, in a way that was perceptually equivalent to reducing the contrast of one of the plaid components. This was confirmed by finding that the plaid distortion could be nulled by increasing the contrast of the masked or adapted component. A corresponding reduction of perceived contrast for single gratings was observed after adaptation and in some masking conditions. I propose the outlines of a model for edge finding in human vision. The plaid components are processed through cortical, orientation-selective filters that are subject to attenuation by forward masking and adaptation. The outputs of these oriented filters are then linearly summed to emulate circular filtering, and zero crossings (zcs) in the combined output are used to determine edge locations. Masking or adapting to a grating attenuates some oriented filters more than others, and although this changes only the effective contrast of the components, it results in a geometric distortion at the zc level after different filters have been combined. The orientation of zcs may not correspond at all with the orientation of Fourier components, but they are correctly predicted by this two-stage model. The oriented filters are not 'orientation detectors', but are precursors to a more subtle stage that locates and represents spatial features.

  17. A computer implementation of a theory of human stereo vision.

    Science.gov (United States)

    Grimson, W E

    1981-05-12

    Recently, Marr & Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented, and consists of five steps. (i) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by ∇²G, the Laplacian of a Gaussian function. (ii) Zero crossings in the filtered images are found along horizontal scan lines. (iii) For each mask size, matching takes place between zero crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, it can be shown that false targets pose only a simple problem. (iv) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence. In this way, the matching process gradually moves from dealing with large disparities at a low resolution to dealing with small disparities at a high resolution. (v) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-dimensional sketch. To support the adequacy of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. Also, statistical assumptions made by Marr & Poggio are supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
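
    Steps (i) and (ii) above can be sketched in one dimension: the kernel samples the second derivative of a Gaussian (the 1-D analogue of ∇²G), the scan line is convolved with it, and sign changes in the output mark zero crossings. This is a sketch of the idea, not Grimson's implementation; kernel size and σ are arbitrary choices here.

```python
import math

def log_kernel_1d(sigma, radius):
    """Sampled second derivative of a Gaussian (1-D analogue of the ∇²G mask)."""
    return [((x * x - sigma * sigma) / sigma ** 4)
            * math.exp(-x * x / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]

def convolve(signal, kernel):
    """Valid-mode correlation; the kernel is symmetric, so this equals convolution."""
    r = len(kernel) // 2
    return [sum(signal[i + j - r] * kernel[j] for j in range(len(kernel)))
            for i in range(r, len(signal) - r)]

def zero_crossings(values):
    """Indices where the filtered signal changes sign."""
    return [i for i in range(1, len(values))
            if values[i - 1] * values[i] < 0]

# A step edge at index 20 of a 40-sample scan line.
scan = [0.0] * 20 + [1.0] * 20
filtered = convolve(scan, log_kernel_1d(sigma=2.0, radius=6))
zc = zero_crossings(filtered)
```

    The single zero crossing falls exactly at the step, which is why zero crossings of the ∇²G-filtered image serve as edge candidates for the matching stage.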

  18. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    Science.gov (United States)

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.

  19. Front-end vision and multi-scale image analysis: multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  20. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    -cost embedded computer with very limited computational resources as compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images. After the segmentation stage ... demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. In addition, the computation-time results show that the Raspberry Pi is a viable solution in such a real-time video processing system.

  1. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    Science.gov (United States)

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence to support whether or not some human skeletal remains belong to a missing person. It involves the process of overlaying a skull with a number of ante mortem images of an individual and the analysis of their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error-prone, and time-consuming part of the whole process. Though numerical assessment of the method's quality has not yet been achieved, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can thus be considered a tool to aid forensic anthropologists in the skull-face overlay, automating and avoiding the subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    Directory of Open Access Journals (Sweden)

    Joaquin J. Casanova

    2014-09-01

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.
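
    The hue statistic compared between treatments (about 118° for unstressed versus 111° for stressed wheat) can be computed from segmented vegetation pixels with the standard RGB-to-HSV hue formula. This is a generic sketch; the EM segmentation itself is omitted and the function names are ours.

```python
def hue_degrees(r, g, b):
    """HSV hue (0-360°) of an 8-bit RGB pixel; None for greys (hue undefined)."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return None
    d = mx - mn
    if mx == r:
        h = (g - b) / d % 6
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return h * 60

def mean_vegetation_hue(pixels):
    """Average hue over pixels already labelled vegetation by the segmenter."""
    hues = [hue_degrees(*p) for p in pixels]
    hues = [h for h in hues if h is not None]
    return sum(hues) / len(hues)
```

    Healthy green vegetation clusters near 120°; a shift toward yellow (lower hue) is the stress signal the study exploits.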

  3. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

    Thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional methods for measuring TCF thickness are single/double wire methods, which have several problems, such as risks to personnel safety, susceptibility to operator influence, and poor repeatability. To solve these problems, in this paper we specifically designed and built an instrumentation and present a novel method to measure TCF thickness. The instrumentation is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including an image denoising method, a monocular range measurement method, the scale-invariant feature transform (SIFT), and an image gray-gradient detection method. Using the present instrumentation and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrumentation and method worked well on site at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of the traditional measurement methods, or even replace them.
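
    The monocular range measurement in the method list reduces, under a pinhole-camera model, to Z = f·H/h: a target of known real height H that spans h pixels when the focal length is f pixels lies at range Z. The thickness-from-two-ranges step below is our illustrative assumption, not necessarily the authors' formulation.

```python
def monocular_range(focal_px, real_height_mm, pixel_height):
    """Pinhole-camera range to a target of known size: Z = f * H / h."""
    return focal_px * real_height_mm / pixel_height

def flux_thickness(range_to_top_mm, range_to_bottom_mm):
    """Layer thickness as the difference of two ranges along the measurement bar."""
    return range_to_bottom_mm - range_to_top_mm
```

    For example, a 50 mm marker imaged at 100 px with a 1000 px focal length lies 500 mm away; ranging the top and bottom of the flux layer and subtracting gives its thickness.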

  4. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Artificial vision systems, also known as computer vision, are potent quality inspection tools that can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real time for color measurement on flat-surface food. For this purpose, a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed against a conventional colorimeter (CIE L*a*b* model), with which the errors of the color parameters were estimated: eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures an adequate and efficient application of automation to quality control processes in the food industry sector.

  6. Former food products safety: microbiological quality and computer vision evaluation of packaging remnants contamination.

    Science.gov (United States)

    Tretola, M; Di Rosa, A R; Tirloni, E; Ottoboni, M; Giromini, C; Leone, F; Bernardi, C E M; Dell'Orto, V; Chiofalo, V; Pinotti, L

    2017-08-01

    The use of alternative feed ingredients in farm animals' diets can be an interesting choice from several standpoints, including safety. In this respect, this study investigated the safety features of selected former food products (FFPs) intended for animal nutrition, produced in the framework of the IZS PLV 06/14 RC project by an FFP processing plant. Six FFP samples, both mash and pelleted, were analysed for the enumeration of total viable count (TVC) (ISO 4833), Enterobacteriaceae (ISO 21528-1), Escherichia coli (ISO 16649-1), coagulase-positive Staphylococci (CPS) (ISO 6888), presumptive Bacillus cereus and its spores (ISO 7932), sulphite-reducing Clostridia (ISO 7937), and yeasts and moulds (ISO 21527-1), and for the presence in 25 g of Salmonella spp. (ISO 6579). On the same samples, the presence of undesired ingredients, identifiable as remnants of packaging materials, was evaluated by two different methods: stereomicroscopy according to published methods, and stereomicroscopy coupled with a computer vision system (IRIS Visual Analyzer VA400). All FFPs analysed were safe from a microbiological point of view: TVC was limited and Salmonella was always absent. When remnants of packaging materials were considered, the contamination level was below 0.08% (w/w). Of note, packaging remnants were found mainly in the 1-mm sieve mesh fractions. Finally, the innovative computer vision system, combined with a stereomicroscope, demonstrated the possibility of rapidly detecting the presence of packaging remnants in FFPs. In conclusion, the FFPs analysed in the present study can be considered safe, even though some improvements in FFP processing in the feeding plant could be useful in further reducing their microbial loads and impurities.

  7. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    Science.gov (United States)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand by humans. Human operators cannot grade eggs perfectly and continuously. Instead of an egg grading system based on weight measure, an automatic system for egg grading using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that more egg classes would change when using egg shape parameters than when using weight measure. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques, such as image cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed with a k-nearest neighbour classifier in the classification process. Two methods, namely, supervised learning (using weight measure as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), are conducted to execute the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision performs better with shape-based features, since it operates on images, whereas the weight parameter is better suited to a weight-based grading system.
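
    The grading pipeline above ends in a k-nearest-neighbour vote over shape features. A minimal sketch of that step, using hypothetical feature values and grade labels rather than the paper's data:

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Majority vote among the k training samples nearest to the query
    (Euclidean distance in shape-feature space)."""
    dist = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dist)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical features per egg: [area_px, major_axis_px, minor_axis_px]
train_X = np.array([[5200, 95, 70],   # grade A
                    [5100, 93, 69],   # grade A
                    [4300, 88, 64],   # grade B
                    [4200, 87, 63],   # grade B
                    [3500, 80, 58],   # grade C
                    [3400, 79, 57]], dtype=float)
train_y = np.array(["A", "A", "B", "B", "C", "C"])

print(knn_classify(train_X, train_y, np.array([5150.0, 94.0, 69.5])))  # prints A
```

    In the paper, the feature vectors would first pass through information-gain-ratio selection and principal component analysis before reaching the classifier.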

  8. Computer vision and augmented reality in gastrointestinal endoscopy

    Science.gov (United States)

    Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M.

    2015-01-01

    Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy—which relies on the integration of high-definition video data with pathologic correlates—requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. PMID:26133175

  9. Former Food Products Safety Evaluation: Computer Vision as an Innovative Approach for the Packaging Remnants Detection

    Directory of Open Access Journals (Sweden)

    Marco Tretola

    2017-01-01

    Full Text Available Former food products (FFPs) represent a way by which leftovers from the food industry (e.g., biscuits, bread, breakfast cereals, chocolate bars, pasta, savoury snacks, and sweets) are converted into ingredients for the feed industry, thereby keeping food losses within the food chain. FFPs represent an alternative source of nutrients for animal feeding. However, beyond their nutritional value, the use of FFPs in animal feeding also raises safety issues, such as those related to the presence of packaging remnants. These contaminants might enter FFPs during food processing (e.g., collection, unpacking, mixing, grinding, and drying). Nowadays, artificial senses are widely used for the detection of foreign material in food, and all of them involve computer vision. The computer vision technique provides detailed pixel-based characterisations of the colour spectrum of food products, suitable for quality evaluation. The application of computer vision for a rapid qualitative screening of FFPs' safety features, in particular for the detection of packaging remnants, has been recently tested. This paper presents the basic principles, advantages, and disadvantages of the computer vision method, with an evaluation of its potential for the detection of packaging remnants in FFPs.

  10. Computer vision syndrome in presbyopia and beginning presbyopia: effects of spectacle lens type.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    This office field study investigated the effects of different types of spectacle lenses habitually worn by computer users with presbyopia and in the beginning stages of presbyopia. Computer vision syndrome was assessed through reported complaints and ergonomic conditions. A questionnaire regarding the type of habitually worn near-vision lenses at the workplace, visual conditions and the levels of different types of complaints was administered to 175 participants aged 35 years and older (mean ± SD: 52.0 ± 6.7 years). Statistical factor analysis identified five specific aspects of the complaints. Workplace conditions were analysed based on photographs taken in typical working conditions. In the subgroup of 25 users between the ages of 36 and 57 years (mean 44 ± 5 years), who wore distance-vision lenses and performed more demanding occupational tasks, the reported extents of 'ocular strain', 'musculoskeletal strain' and 'headache' increased with the daily duration of computer work and explained up to 44 per cent of the variance (rs = 0.66). In the other subgroups, this effect was smaller, while in the complete sample (n = 175), this correlation was approximately rs = 0.2. The subgroup of 85 general-purpose progressive lens users (mean age 54 years) adopted head inclinations that were approximately seven degrees more elevated than those of the subgroups with single vision lenses. The present questionnaire was able to assess the complaints of computer users depending on the type of spectacle lenses worn. A missing near-vision addition among participants in the early stages of presbyopia was identified as a risk factor for complaints among those with longer daily durations of demanding computer work. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  11. Bus Automata For Intelligent Robots And Computer Vision

    Science.gov (United States)

    Rothstein, Jerome

    1988-02-01

    Bus automata (BA's) are arrays of automata, each controlling a module of a global interconnection network, an automaton and its module constituting a cell. Connecting modules permits cells to become effectively nearest neighbors even when widely separated. This facilitates parallelism in computation far in excess of that allowed by the "bucket-brigade" communication bottleneck of traditional cellular automata (CA's). Distributed information storage via local automaton states permits complex parallel data processing for rapid pattern recognition, language parsing and other distributed computation at systolic array rates. Global BA architecture can be entirely changed in the time to make one cell state transition. The BA is thus a neural model (cells correspond to neurons) with network plasticity attractive for brain models. Planar (chip) BA's admitting optical input (phototransistors) become powerful retinal models. The distributed input pattern is optically fed directly to distributed local memory, ready for distributed processing, both "retinally" and cooperatively with other BA chips ("brain"). This composite BA can compute control signals for output organs, and sensory inputs other than visual can be utilized similarly. In the BA retina is essentially brain, as in mammals (retina and brain are embryologically the same). The BA can also model opto-motor response (frogs, insects) or sonar response (dolphins, bats), and is proposed as the model of choice for the brains of future intelligent robots and for computer eyes with local parallel image processing capability. Multidimensional formal languages are introduced, corresponding to BA's and patterns the way generative grammars correspond to sequential machines, and applied to fractals and their recognition by BA's.

  12. Top 10 Threats to Computer Systems Include Professors and Students

    Science.gov (United States)

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  13. Fluid flow simulations meet high-speed video: Computer vision comparison of droplet dynamics.

    Science.gov (United States)

    Kulju, S; Riegger, L; Koltay, P; Mattila, K; Hyväluoma, J

    2018-03-16

    While multiphase flows, particularly droplet dynamics, are ordinary in nature as well as in industrial processes, their mathematical and computational modelling continue to pose challenging research tasks; established approaches for tackling them are yet to be found. The lack of analytical flow field solutions for non-trivial droplet dynamics hinders validation of computer simulations and, hence, their application in research problems. High-speed videos and computer vision algorithms can provide a viable approach to validate simulations directly against experiments. Droplets of water (or glycerol-water mixtures) impacting on both hydrophobic and superhydrophobic surfaces were imaged with a high-speed camera. The corresponding configurations were simulated using a lattice-Boltzmann multiphase scheme. Video frames from experiments and simulations were compared, by means of computer vision, over entire droplet impact events. The proposed experimental validation procedure provides a detailed, dynamic one-on-one comparison of a droplet impact. The procedure relies on high-speed video recording of the experiments, computer vision, and a software package for the analysis routines. It is able to quantitatively validate computer simulations against experiments and is widely applicable to multiphase flow systems in general. Copyright © 2018 Elsevier Inc. All rights reserved.
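
    A per-frame comparison of experimental and simulated droplet silhouettes can be sketched with an intersection-over-union score (an illustrative metric on synthetic frames; the paper's exact comparison routines are not reproduced here):

```python
import numpy as np

def silhouette_iou(frame_a, frame_b, thresh=0.5):
    """Binarise two grayscale frames and return the intersection-over-union
    of the droplet silhouettes -- a simple per-frame agreement score."""
    a = frame_a > thresh
    b = frame_b > thresh
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Synthetic example: a circular "droplet" in experiment vs. simulation
yy, xx = np.mgrid[0:64, 0:64]
exp_frame = ((xx - 32)**2 + (yy - 40)**2 < 12**2).astype(float)
sim_frame = ((xx - 33)**2 + (yy - 40)**2 < 12**2).astype(float)  # 1-px offset

print(round(silhouette_iou(exp_frame, sim_frame), 2))
```

    Scoring every frame pair over an impact event yields a time series of agreement values, which is the kind of dynamic one-on-one comparison the abstract describes.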

  14. Computer vision for shoe upper profile measurement via upper and sole conformal matching

    Science.gov (United States)

    Hu, Zhongxu; Bicker, Robert; Taylor, Paul; Marshall, Chris

    2007-01-01

    This paper describes a structured light computer vision system applied to the measurement of the 3D profile of shoe uppers. The trajectory obtained is used to guide an industrial robot for automatic edge roughing around the contour of the shoe upper so that the bonding strength can be improved. Due to the specific contour and unevenness of the shoe upper, even if the 3D profile is obtained using computer vision, it is still difficult to reliably define the roughing path around the shape. However, the shape of the corresponding shoe sole is better defined, and it is much easier to measure the edge using computer vision. Therefore, a feasible strategy is to measure both the upper and sole profiles, and then align and fit the sole contour to the upper, in order to obtain the best fit. The trajectory of the edge of the desired roughing path is calculated and is then smoothed and interpolated using NURBS curves to guide an industrial robot for shoe upper surface removal; experiments show robust and consistent results. An outline description of the structured light vision system is given here, along with the calibration techniques used.

  15. OpenVX-based Python Framework for real-time cross platform acceleration of embedded computer vision applications

    Directory of Open Access Journals (Sweden)

    Ori Heimlich

    2016-11-01

    Full Text Available Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, that attempts to provide both system-level and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained data-flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative and functional programming, nor does it provide runtime or type checking. Here we present a Python-based full implementation of OpenVX, which eliminates much of the discrepancy between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. Demonstrations include static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.
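
    The coarse-grained data-flow idea can be illustrated with a toy graph abstraction (purely hypothetical Python; this is not the OpenVX API or the authors' implementation):

```python
import numpy as np

class Graph:
    """Toy data-flow graph: nodes are added in topological order; a runtime
    (which could fuse or parallelise nodes) executes the whole graph."""
    def __init__(self):
        self.nodes = []                       # (function, input names, output name)
    def add(self, fn, inputs, output):
        self.nodes.append((fn, inputs, output))
        return self
    def run(self, **data):
        for fn, inputs, output in self.nodes:
            data[output] = fn(*[data[name] for name in inputs])
        return data

# Two toy kernels: a vertical 3-tap blur and a horizontal gradient magnitude
blur = lambda img: (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)) / 3.0
grad = lambda img: np.abs(np.diff(img, axis=1))

g = Graph().add(blur, ["frame"], "smoothed").add(grad, ["smoothed"], "edges")
out = g.run(frame=np.arange(12.0).reshape(3, 4))
print(out["edges"].shape)  # prints (3, 3)
```

    Because the whole pipeline is declared before execution, an implementer is free to rewrite the graph — merging kernels or offloading them to an accelerator — which is the optimization opportunity OpenVX standardizes.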

  16. Sigma: computer vision in the service of safety and reliability in the inspection services

    International Nuclear Information System (INIS)

    Pineiro, P. J.; Mendez, M.; Garcia, A.; Cabrera, E.; Regidor, J. J.

    2012-01-01

    Computer vision has grown very fast in the last decade, with highly efficient tools and algorithms. This allows the development of new applications in the nuclear field, providing more efficient equipment and tasks: redundant systems, vision-guided mobile robots, automated visual defect recognition, measurement, etc. In this paper Tecnatom describes a detailed example of a visual computing application developed to provide secure, redundant identification of the thousands of tubes existing in a power plant steam generator. Some other on-going or planned visual computing projects by Tecnatom are also introduced. New possibilities of application appear in inspection systems for nuclear components, where the main objective is to maximize their reliability. (Author) 6 refs.

  17. Direct methods for Poisson problems in low-level computer vision

    Science.gov (United States)

    Chhabra, Atul K.; Grogan, Timothy A.

    1990-09-01

    Several problems in low-level computer vision can be mathematically formulated as linear elliptic partial differential equations of the second order. A subset of these problems can be expressed in the form of a Poisson equation, Lu(x, y) = f(x, y). In this paper, fast direct methods for solving the Poisson equations of computer vision are developed. Until recently, iterative methods were used to solve these equations. Recently, direct Fourier techniques were suggested to speed up the computation. We present the Fourier Analysis and Cyclic Reduction (FACR) method, which is faster than the Fourier method or the Cyclic Reduction method alone. For computation on an n × n grid, the operation count for the Fourier method is O(n² log₂ n), and that for the FACR method is O(n² log₂ log₂ n). The FACR method first reduces the system of equations into a smaller set using Cyclic Reduction. Next, the reduced system is solved by the Fourier method. The final solution is obtained by back-substituting the solution of the reduced system. With Neumann boundary conditions, a Poisson equation does not have a unique solution. We show how a physically meaningful solution can be obtained under such circumstances. Application of the FACR and other methods is discussed for two problems of low-level computer vision: lightness, or reflectance from brightness, and recovering height from surface gradient.
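
    A compact sketch of the Fourier step on a periodic grid (the paper treats Neumann boundary conditions and prepends cyclic reduction; this example assumes periodic boundaries for brevity):

```python
import numpy as np

def poisson_fft(f, h=1.0):
    """Solve the discrete Poisson equation Lu = f on an n x n periodic grid
    with the Fourier method: the 5-point Laplacian is diagonalised by the
    2-D FFT, so the solve is a pointwise division in frequency space."""
    n = f.shape[0]
    k = np.fft.fftfreq(n) * n
    lam = (2 * np.cos(2 * np.pi * k / n) - 2) / h**2   # Laplacian eigenvalues per axis
    denom = lam[:, None] + lam[None, :]
    denom[0, 0] = 1.0                                  # avoid dividing the zero mode
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0                                  # pick the zero-mean solution
    return np.real(np.fft.ifft2(u_hat))

# Verify: applying the 5-point Laplacian to the solution recovers f
rng = np.random.default_rng(0)
f = rng.standard_normal((32, 32)); f -= f.mean()       # compatible right-hand side
u = poisson_fft(f)
Lu = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
      np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
print(np.allclose(Lu, f))  # prints True
```

    The non-uniqueness the abstract mentions shows up here as the zero frequency: any constant can be added to u, so the sketch fixes the gauge by forcing a zero-mean solution.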

  18. Computer vision for real-time orbital operations. Center directors discretionary fund

    Science.gov (United States)

    Vinz, F. L.; Brewster, L. L.; Thomas, L. D.

    1984-01-01

    Machine vision research is examined as it relates to the NASA Space Station program and its associated Orbital Maneuvering Vehicle (OMV). Initial operation of the OMV for orbital assembly, docking, and servicing is manually controlled from the ground by means of an on-board TV camera. These orbital operations may be accomplished autonomously by machine vision techniques which use the TV camera as a sensing device. Classical machine vision techniques are described. An alternate method is developed and described which employs a syntactic pattern recognition scheme. It has the potential for substantial reduction of computing and data storage requirements in comparison to Two-Dimensional Fast Fourier Transform (2D FFT) image analysis. The method embodies powerful heuristic pattern recognition capability by identifying image shapes such as elongation, symmetry, number of appendages, and the relative length of appendages.

  19. Computational simulation to understand vision changes during prolonged weightlessness.

    Science.gov (United States)

    Rose, William C

    2013-01-01

    A mathematical model of whole body and cerebral hemodynamics is a useful tool for investigating visual impairment and intracranial pressure (VIIP), a recently described condition associated with space flight. VIIP involves loss of visual acuity, anatomical changes to the eye, and, usually, elevated cerebrospinal fluid pressure. Loss of visual acuity is a significant threat to astronaut health and performance. It is therefore important to understand the pathogenesis of VIIP. Some of the experimental measurements that could lead to better understanding of the pathophysiology are impossible or infeasible on orbit. A computational implementation of a mathematical model of hypothetical pathophysiological processes is therefore valuable. Such a model is developed, and is used to investigate how changes in vascular compliance or pressure can influence intraocular or intracranial pressure.

  20. VISION development

    International Nuclear Information System (INIS)

    Hernandez, J.E.; Sherwood, R.J.; Whitman, S.R.

    1994-01-01

    VISION is a flexible and extensible object-oriented programming environment for prototyping computer-vision and pattern-recognition algorithms. This year's effort focused on three major areas: documentation, graphics, and support for new applications

  1. Integrating computation into the undergraduate curriculum: A vision and guidelines for future developments

    Science.gov (United States)

    Chonacky, Norman; Winch, David

    2008-04-01

    There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.

  2. Application of computer vision in studying fire plume behavior of tilting flames

    Science.gov (United States)

    Aminfar, Amirhessam; Cobian Iñiguez, Jeanette; Pham, Stephanie; Chong, Joey; Burke, Gloria; Weise, David; Princevac, Marko

    2016-11-01

    With developments in computer science, especially in the field of computer vision, image processing has become an integral part of flow visualization. Computer vision can be used to visualize flow structure and to quantify its properties. We used a computer vision algorithm to study fire plume tilting when the fire interacts with a solid wall. As the fire propagates toward the wall, the amount of air available for the fire to consume decreases on the wall side; therefore, the fire starts tilting towards the wall. Aspen wood was used as the fuel source and various configurations of the fuel were investigated. The plume behavior was captured using a digital camera. In post-processing, the flames were isolated from the image using edge detection techniques, making it possible to develop an algorithm to calculate flame height and flame orientation. Moreover, by using an optical flow algorithm we were able to calculate the speed associated with the edges of the flame, which is related to the flame propagation speed and the effective vertical velocity of the flame. The results demonstrated that as the size of the flame increased, the flames started tilting towards the wall, leading to the conclusion that there should be a critical fire area at which the flames start to tilt. The algorithm also made it possible to calculate a critical distance at which the flame starts orienting towards the wall.
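
    Once edge detection yields a binary flame mask, the orientation measurement can be sketched by fitting a line through the per-row centroids of the flame pixels and converting its slope to a tilt angle (a simplified stand-in for the authors' algorithm, demonstrated on a synthetic mask):

```python
import numpy as np

def flame_tilt_deg(mask):
    """Estimate flame tilt (degrees from vertical) from a binary flame mask:
    least-squares fit of the per-row centroid column against row index."""
    h, w = mask.shape
    rows = np.arange(h)
    counts = mask.sum(axis=1)
    valid = counts > 0
    cx = (mask * np.arange(w)).sum(axis=1)[valid] / counts[valid]
    slope = np.polyfit(rows[valid], cx, 1)[0]   # horizontal drift per row
    return np.degrees(np.arctan(slope))

# Synthetic tilted flame: a 7-px-wide band whose centre shifts 0.5 px per row
h, w = 60, 60
mask = np.zeros((h, w), dtype=int)
for r in range(h):
    c = int(10 + 0.5 * r)
    mask[r, c - 3:c + 4] = 1

print(round(flame_tilt_deg(mask), 1))   # ~26.6 degrees, i.e. arctan(0.5)
```

    Tracking this angle frame by frame gives the tilt-versus-fire-size relationship the study reports.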

  3. Comparing visual representations across human fMRI and computational vision.

    Science.gov (United States)

    Leeds, Daniel D; Seibert, Darren A; Pyles, John A; Tarr, Michael J

    2013-11-22

    Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from "interest points," was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation.
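
    The representational dissimilarity analysis at the core of this comparison reduces to two steps: build an RDM per system (1 − correlation between response patterns for every stimulus pair), then correlate the RDMs' upper triangles. A minimal sketch with synthetic patterns (Kriegeskorte et al., 2008, use Spearman rank correlation for the second step; plain Pearson is used here for brevity):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: rows of `responses` are the
    response patterns to each stimulus; entry (i, j) is 1 - Pearson r."""
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (Pearson; substitute
    rank correlation for the Spearman variant)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Synthetic example: 4 stimuli; the "neural" patterns duplicate the model
# features, so both systems share the same representational geometry.
model = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
neural = np.hstack([model, model])      # identical geometry in more "voxels"

print(round(rdm_similarity(rdm(neural), rdm(model)), 6))  # prints 1.0
```

    The searchlight procedure repeats this similarity computation for every small sphere of voxels, mapping where in cortex each model's geometry matches the neural data.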

  4. Computer vision and sensor fusion for detecting buried objects

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Hernandez, J.E.; Sengupta, S.K.; Sherwood, R.J.; Schaich, P.C.; Buhl, M.R.; Kane, R.J.; DelGrande, N.K.

    1992-10-01

    Given multiple images of the surface of the earth from dual-band infrared sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. Supervised learning pattern classifiers (including neural networks) are used. We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing information from multiple sensor types. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved problem of detecting buried land mines from an airborne standoff platform.

  5. Computer vision syndrome: A study of the knowledge, attitudes and practices in Indian Ophthalmologists

    Directory of Open Access Journals (Sweden)

    Bali Jatinder

    2007-01-01

    Full Text Available Purpose: To study the knowledge, attitude and practices (KAP) towards computer vision syndrome prevalent in Indian ophthalmologists and to assess whether 'computer use by practitioners' had any bearing on the knowledge and practices in computer vision syndrome (CVS). Materials and Methods: A random KAP survey was carried out on 300 Indian ophthalmologists using a 34-point spot-questionnaire in January 2005. Results: All the doctors who responded were aware of CVS. The chief presenting symptoms were eyestrain (97.8%), headache (82.1%), tiredness and burning sensation (79.1%), watering (66.4%) and redness (61.2%). Ophthalmologists using computers reported that focusing from distance to near and vice versa (P = 0.006, χ² test), blurred vision at a distance (P = 0.016, χ² test) and blepharospasm (P = 0.026, χ² test) formed part of the syndrome. The main mode of treatment used was tear substitutes. Half of the ophthalmologists (50.7%) were not prescribing any spectacles. They did not have any preference for any special type of glasses (68.7%) or spectral filters. Computer-users were more likely to prescribe sedatives/anxiolytics (P = 0.04, χ² test), spectacles (P = 0.02, χ² test) and conscious frequent blinking (P = 0.003, χ² test) than the non-computer-users. Conclusions: All respondents were aware of CVS. Confusion regarding treatment guidelines was observed in both groups. Computer-using ophthalmologists were more informed of symptoms and diagnostic signs but were misinformed about treatment modalities.

  6. Quality Inspection and Grading of Canned Green Peas using Computer Vision

    OpenAIRE

    Jinesh V N

    2015-01-01

    Canned green peas are a widely used vegetable and a preferred food in emergency food supplies for victims of natural disasters. They are highly nutritive and rich in protein. The quality of canned green peas is determined by their color, smell and shape. A computer vision system is used to inspect the quality of the peas. The samples for the experiment were acquired from the proposed image acquisition system with an image resolution of 400×300. The proposed system facilitates the color and dimension ...

  7. Preliminary Design of a Recognition System for Infected Fish Species Using Computer Vision

    OpenAIRE

    Hu, Jing; Li, Daoliang; Duan, Qingling; Chen, Guifen; Si, Xiuli

    2011-01-01

    Part 1: Decision Support Systems, Intelligent Systems and Artificial Intelligence Applications; International audience; For the purpose of classifying fish species, a recognition system was preliminarily designed using computer vision. In the first place, pictures were pre-processed by purpose-built programs and divided into rectangular pieces. Secondly, color and texture features were extracted from the selected rectangular fish-skin images. Finally, all the images were classified by multi...

  8. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  9. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
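
    The recursive equations and the constant-time rectangular sum the abstract refers to look like this in software (a sketch; the paper's contribution is the decomposition of these recurrences for row-parallel hardware):

```python
import numpy as np

def integral_image(img):
    """Integral image via the standard recursive equations,
        s(x, y)  = s(x, y-1)  + i(x, y)    (running row sums)
        ii(x, y) = ii(x-1, y) + s(x, y)    (column accumulation)
    expressed here as cumulative sums along each axis."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from the integral image in O(1):
    four look-ups regardless of the rectangle size."""
    total = ii[r1, c1]
    if r0 > 0: total -= ii[r0 - 1, c1]
    if c0 > 0: total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2))   # central 2x2 block: 5+6+9+10, prints 30
```

    The serial dependency in the recurrences is why software computes the integral image quickly but hardware needs the decomposed, row-parallel variants the paper proposes.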

  10. Optimisation and assessment of three modern touch screen tablet computers for clinical vision testing.

    Directory of Open Access Journals (Sweden)

    Humza J Tahir

    Full Text Available Technological advances have led to the development of powerful yet portable tablet computers whose touch-screen resolutions now permit the presentation of targets small enough to test the limits of normal visual acuity. Such devices have become ubiquitous in daily life and are moving into the clinical space. However, in order to produce clinically valid tests, it is important to identify the limits imposed by the screen characteristics, such as resolution, brightness uniformity, contrast linearity and the effect of viewing angle. Previously we have conducted such tests on the iPad 3. Here we extend our investigations to 2 other devices and outline a protocol for calibrating such screens, using standardised methods to measure the gamma function, warm up time, screen uniformity and the effects of viewing angle and screen reflections. We demonstrate that all three devices manifest typical gamma functions for voltage and luminance with warm up times of approximately 15 minutes. However, there were differences in homogeneity and reflectance among the displays. We suggest practical means to optimise quality of display for vision testing including screen calibration.

  11. Optimisation and assessment of three modern touch screen tablet computers for clinical vision testing.

    Science.gov (United States)

    Tahir, Humza J; Murray, Ian J; Parry, Neil R A; Aslam, Tariq M

    2014-01-01

    Technological advances have led to the development of powerful yet portable tablet computers whose touch-screen resolutions now permit the presentation of targets small enough to test the limits of normal visual acuity. Such devices have become ubiquitous in daily life and are moving into the clinical space. However, in order to produce clinically valid tests, it is important to identify the limits imposed by the screen characteristics, such as resolution, brightness uniformity, contrast linearity and the effect of viewing angle. Previously we have conducted such tests on the iPad 3. Here we extend our investigations to two other devices and outline a protocol for calibrating such screens, using standardised methods to measure the gamma function, warm-up time, screen uniformity and the effects of viewing angle and screen reflections. We demonstrate that all three devices manifest typical gamma functions for voltage and luminance, with warm-up times of approximately 15 minutes. However, there were differences in homogeneity and reflectance among the displays. We suggest practical means to optimise display quality for vision testing, including screen calibration.

  12. Computer Vision

    Science.gov (United States)

    1982-04-01

    1970, allows edges to be located (at maxima of the surface gradient, e.g.) to subpixel accuracy. Another important idea, first proposed by Hueckel... define. In three dimensions, the problem is rendered even more difficult by the fact that only one side of an object can be visible in an image; the

  13. Computer vision syndrome prevalence, knowledge and associated factors among Saudi Arabia University Students: Is it a serious problem?

    Science.gov (United States)

    Al Rashidi, Sultan H; Alhumaidan, H

    2017-01-01

    Computers and other visual display devices are now an essential part of our daily life. With their increased use, a very large population globally is experiencing sundry ocular symptoms, such as dry eyes, eye strain, irritation, and redness of the eyes, to name a few. Collectively, all such computer-related symptoms are usually referred to as computer vision syndrome (CVS). The current study aims to define the prevalence, knowledge in the community, pathophysiology, associated factors, and prevention of CVS. This is a cross-sectional study conducted in Qassim University College of Medicine over a period of 1 year, from January 2015 to January 2016, using a questionnaire to collect relevant data including demographics and the various variables to be studied. 634 students were recruited from a public-sector university in Qassim, Saudi Arabia, regardless of their age and gender. The data were then statistically analyzed in SPSS version 22, and the descriptive data were expressed as percentages, mode, and median, using graphs where needed. A total of 634 students with a mean age of 21.40 years (SD 1.997, range 18-25) were included as study subjects, with a male predominance (77.28%). Of the total subjects, the majority (459, 72%) presented with acute symptoms while the remainder had chronic problems. A clear-cut majority had carried the symptoms for 1 month. The statistical analysis revealed serious symptoms in the majority of study subjects, especially those who are permanent users of a computer for long hours. Continuous use of computers for long hours was found to be associated with severe vision problems, especially in those using computers and similar devices for long durations.

  14. In-line 3D print failure detection using computer vision

    DEFF Research Database (Denmark)

    Lyngby, Rasmus Ahrenkiel; Wilm, Jakob; Eiríksson, Eyþór Rúnar

    2017-01-01

    Here we present our findings on a novel real-time vision system that allows for automatic detection of failure conditions that are considered outside of nominal operation. These failure modes include warping, build plate delamination and extrusion failure. Our system consists of a calibrated camera...

  15. New approach for extracting depth of structure crack by Computer Vision

    International Nuclear Information System (INIS)

    Youm, Min Kyo; Min, Byung Il; Park, Ki Hyun; Suh, Kyung Suk

    2014-01-01

    Nuclear disasters generate not only primary accidents through explosions but also secondary accidents through radiation. In this paper, we developed an automatic crack detection system for nuclear structures using computer vision processing. Through this, we can conduct not only routine safety management but also checking of the direction and depth of cracks using robot vision in areas restricted to humans. In this study, we developed an automatic structure-crack extraction system and verified it through testing. First, the depth of a crack can be calculated through a crack detection system based on image processing. Second, under poor imaging conditions, the performance of the algorithm can be improved using a filtering method. In actual situations, however, a high-resolution image is needed. Through this study, we performed crack extraction using computer vision programming and improved its performance with a filtering method. Extraction will not proceed smoothly, however, when a crack in the structure is indistinct or when lines similar to cracks are present. We therefore need further study on the above-mentioned problems.
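
The abstract does not give the authors' operators, so the sketch below stands in with a generic gradient-magnitude segmentation plus a small noise-filtering step of the kind mentioned (all names and thresholds here are illustrative, not from the paper):

```python
import numpy as np

def crack_mask(gray, thresh):
    """Flag crack-like features where the gradient magnitude is high.
    A generic stand-in for the unspecified detection operators."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > thresh

def denoise(mask):
    """Drop pixels with no 8-connected neighbour (isolated noise),
    standing in for the unspecified 'filtering method'."""
    m = mask.astype(int)
    p = np.pad(m, 1)
    # count of set pixels in each 3x3 neighbourhood, excluding the centre
    neigh = sum(p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
                for dy in range(3) for dx in range(3)) - m
    return mask & (neigh > 0)
```

A neighbour-count filter like this keeps thin, connected crack traces while discarding isolated responses, which matches the abstract's observation that filtering helps under poor imaging conditions.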

  16. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    International Nuclear Information System (INIS)

    Moore, Kevin L.; Moiseenko, Vitali; Kagadis, George C.; McNutt, Todd R.; Mutic, Sasa

    2014-01-01

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy

  17. Vision 20/20: Automation and advanced computing in clinical radiation oncology.

    Science.gov (United States)

    Moore, Kevin L; Kagadis, George C; McNutt, Todd R; Moiseenko, Vitali; Mutic, Sasa

    2014-01-01

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  18. Human-computer interface including haptically controlled interactions

    Science.gov (United States)

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
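
The force-to-scroll-rate mapping described can be sketched with a simple thresholded linear gain (all constants here are illustrative, not from the patent):

```python
def scroll_rate(force, threshold=0.5, gain=120.0, max_rate=600.0):
    """Map force (N) applied against the haptic boundary to a scroll
    rate (pixels/s). Below `threshold` the boundary only resists;
    beyond it, rate grows with the excess force, capped at `max_rate`."""
    excess = max(0.0, force - threshold)
    return min(max_rate, gain * excess)
```

A dead zone below the threshold lets the user rest against the boundary and feel it without scrolling, consistent with the haptic-feedback behaviour the abstract describes.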

  19. A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision

    Directory of Open Access Journals (Sweden)

    Loubna Benchikhi

    2017-10-01

    Full Text Available Computer vision applications require choosing operators and their parameters in order to provide the best outcomes. Often, users draw on expert knowledge and must try out many combinations manually to find the best one. As performance, time and accuracy are important, it is necessary to automate parameter optimization, at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the process of algorithm setting and provides optimal parameters for vision applications. This work reconsiders a discretization problem to adapt the cuckoo search algorithm and presents the procedure of parameter optimization. Experiments on real examples and comparisons to other metaheuristic-based approaches, particle swarm optimization (PSO), reinforcement learning (RL) and ant colony optimization (ACO), show the efficiency of this novel method.
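
For background, here is a minimal continuous cuckoo search with Lévy flights, the generic algorithm the paper discretizes and adapts; this sketch is not the authors' ADCS, and all constants are illustrative:

```python
import math
import random

def levy_step(beta=1.5):
    """Levy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n_nests=15, pa=0.25, iters=200, alpha=0.1):
    """Minimize a 2-parameter objective f over [lo, hi]^2."""
    lo, hi = bounds
    nests = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # generate a cuckoo egg from each nest by a Levy flight
        for i in range(n_nests):
            new = [min(hi, max(lo, x + alpha * levy_step())) for x in nests[i]]
            fn = f(new)
            j = random.randrange(n_nests)   # random nest to challenge
            if fn < fit[j]:
                nests[j], fit[j] = new, fn
        # abandon a fraction pa of the worst nests
        worst_first = sorted(range(n_nests), key=lambda i: fit[i], reverse=True)
        for i in worst_first[:int(pa * n_nests)]:
            nests[i] = [random.uniform(lo, hi) for _ in range(2)]
            fit[i] = f(nests[i])
    best = min(range(n_nests), key=lambda i: fit[i])
    return nests[best], fit[best]
```

In the paper's setting, each "nest" would instead encode a discrete combination of operator parameters and f would score the vision pipeline's output.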

  20. Clinical efficacy of Ayurvedic management in computer vision syndrome: A pilot study.

    Science.gov (United States)

    Dhiman, Kartar Singh; Ahuja, Deepak Kumar; Sharma, Sanjeev Kumar

    2012-07-01

    Improper use of the sense organs, violation of the moral code of conduct, and the effect of time are the three basic causative factors behind all health problems. The computer, the knowledge bank of modern life, has emerged as a source of vision-related discomfort, ocular fatigue, and systemic effects. Computer Vision Syndrome (CVS) is the new nomenclature for the visual, ocular, and systemic symptoms arising from prolonged and improper work on the computer, and it is emerging as a pandemic of the 21st century. On critical analysis of the symptoms of CVS against the Tridoshika theory of Ayurveda, following the road map given by Acharya Charaka, it appears to be a Vata-Pittaja ocular-cum-systemic disease which needs a systemic as well as a topical treatment approach. Shatavaryaadi Churna (orally), Go-Ghrita Netra Tarpana (topically), and counseling regarding proper working conditions at the computer were tried in 30 patients with CVS. In group I, where oral and local treatment was given, significant improvement in all symptoms of CVS was observed, whereas groups II and III, which received only local treatment and counseling regarding proper working conditions, respectively, showed insignificant results. The study verified the hypothesis that CVS in the Ayurvedic perspective is a Vata-Pittaja disease affecting mainly the eyes and the body as a whole, and needs systemic intervention rather than topical ocular medication only.

  1. Vision correction for computer users based on image pre-compensation with changing pupil size.

    Science.gov (United States)

    Huang, Jian; Barreto, Armando; Alonso, Miguel; Adjouadi, Malek

    2011-01-01

    Many computer users suffer varying degrees of visual impairment, which hinder their interaction with computers. In contrast with available methods of vision correction (spectacles, contact lenses, LASIK, etc.), this paper proposes a vision correction method for computer users based on image pre-compensation. The blurring caused by visual aberration is counteracted through the pre-compensation performed on images displayed on the computer screen. The pre-compensation model used is based on the visual aberration of the user's eye, which can be measured by a wavefront analyzer. However, the aberration measured is associated with one specific pupil size. If the pupil has a different size during viewing of the pre-compensated images, the pre-compensation model should also be modified to sustain appropriate performance. In order to solve this problem, an adjustment of the wavefront function used for pre-compensation is implemented to match the viewing pupil size. The efficiency of these adjustments is evaluated with an "artificial eye" (high resolution camera). Results indicate that the adjustment used is successful and significantly improves the images perceived and recorded by the artificial eye.
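
The abstract does not state the pre-compensation model in detail; a common choice for counteracting a known blur is Wiener-style inverse filtering of the displayed image with the eye's point-spread function (derived from the measured wavefront), sketched here under that assumption:

```python
import numpy as np

def precompensate(image, psf, k=1e-2):
    """Pre-distort `image` (values in [0, 1]) so that subsequent
    blurring by `psf` approximately restores it. Wiener-style inverse
    filter; `k` regularizes frequencies the PSF nearly cancels."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)       # regularized inverse
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(pre, 0.0, 1.0)               # display gamut limits
```

The pupil-size adjustment the paper describes would correspond to rescaling the wavefront (and hence the PSF) before building `H`, so that the filter matches the viewing pupil rather than the measurement pupil.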

  2. Computer-enhanced stereoscopic vision in a head-mounted operating binocular

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Figl, Michael; Matula, Christian; Hummel, Johann; Hanel, Rudolf; Imhof, Herwig; Wanschitz, Felix; Wagner, Arne; Watzinger, Franz; Bergmann, Helmar

    2003-01-01

    Based on the Varioscope, a commercially available head-mounted operating binocular, we have developed the Varioscope AR, a see-through head-mounted display (HMD) for augmented reality visualization that seamlessly fits into the infrastructure of a surgical navigation system. We have assessed the extent to which stereoscopic visualization improves target localization in computer-aided surgery in a phantom study. In order to quantify the depth perception of a user aiming at a given target, we designed a phantom simulating typical clinical situations in skull base surgery. Sixteen steel spheres were fixed at the base of a bony skull, and several typical craniotomies were applied. After CT scans had been taken, the skull was filled with opaque jelly in order to simulate brain tissue. The positions of the spheres were registered using VISIT, a system for computer-aided surgical navigation. Then attempts were made to locate the steel spheres with a bayonet probe through the craniotomies using VISIT and the Varioscope AR as a stereoscopic display device. Localization of targets 4 mm in diameter using stereoscopic vision and additional visual cues indicating target proximity had a success rate (defined as a first-trial hit rate) of 87.5%. Using monoscopic vision and target proximity indication, the success rate was found to be 66.6%. Omission of visual hints on reaching a target yielded a success rate of 79.2% in the stereo case and 56.25% with monoscopic vision. Time requirements for localizing all 16 targets ranged from 7.5 min (stereo, with proximity cues) to 10 min (mono, without proximity cues). Navigation error is primarily governed by the accuracy of registration in the navigation system, whereas the HMD does not appear to influence localization significantly. We conclude that stereo vision is a valuable tool in augmented reality guided interventions. (note)

  3. Pulmonary nodule characterization, including computer analysis and quantitative features.

    Science.gov (United States)

    Bartholmai, Brian J; Koo, Chi Wan; Johnson, Geoffrey B; White, Darin B; Raghunath, Sushravya M; Rajagopalan, Srinivasan; Moynagh, Michael R; Lindell, Rebecca M; Hartman, Thomas E

    2015-03-01

    Pulmonary nodules are commonly detected in computed tomography (CT) chest screening of a high-risk population. The specific visual or quantitative features on CT or other modalities can be used to characterize the likelihood that a nodule is benign or malignant. Visual features on CT such as size, attenuation, location, morphology, edge characteristics, and other distinctive "signs" can be highly suggestive of a specific diagnosis and, in general, be used to determine the probability that a specific nodule is benign or malignant. Change in size, attenuation, and morphology on serial follow-up CT, or features on other modalities such as nuclear medicine studies or MRI, can also contribute to the characterization of lung nodules. Imaging analytics can objectively and reproducibly quantify nodule features on CT, nuclear medicine, and magnetic resonance imaging. Some quantitative techniques show great promise in helping to differentiate benign from malignant lesions or to stratify the risk of aggressive versus indolent neoplasm. In this article, we (1) summarize the visual characteristics, descriptors, and signs that may be helpful in management of nodules identified on screening CT, (2) discuss current quantitative and multimodality techniques that aid in the differentiation of nodules, and (3) highlight the power, pitfalls, and limitations of these various techniques.

  4. Architecture and VHDL behavioural validation of a parallel processor dedicated to computer vision

    International Nuclear Information System (INIS)

    Collette, Thierry

    1992-01-01

    Speeding up image processing is mainly achieved using parallel computers; SIMD processors (single instruction stream, multiple data stream) have been developed and have proven highly efficient for low-level image processing operations. Nevertheless, their performance drops for most intermediate- or high-level operations, mainly when random data reorganisations in processor memories are involved. The aim of this thesis was to extend SIMD computer capabilities so it can perform more efficiently at the intermediate level of image processing. The study of some representative algorithms of this class points out the limits of this computer. Nevertheless, these limits can be removed by architectural modifications. This leads us to propose SYMPATIX, a new SIMD parallel computer. To validate its new concept, a behavioural model written in VHDL (Hardware Description Language) has been elaborated. With this model, the performance of the new computer has been estimated by running simulations of image processing algorithms. The VHDL modelling approach allows top-down electronic design of the system, giving easy coupling between system architectural modifications and their electronic cost. The obtained results show SYMPATIX to be an efficient computer for low- and intermediate-level image processing. It can be connected to a high-level computer, opening up the development of new computer vision applications. This thesis also presents a top-down design method, based on VHDL, intended for electronic system architects. (author) [fr

  5. Lambda Vision

    Science.gov (United States)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching in the area of applying Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing up the architecture into a speed layer for low-latent processing and a batch layer for higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area field-of-views. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and precision of its results using a computer vision algorithm designed to identify man-made objects in sparse data terrain.

  6. An artificial-vision responsive to patient motions during computer controlled radiation therapy

    International Nuclear Information System (INIS)

    Kalend, A.M.; Shimoga, K.; Kanade, T.; Greenberger, J.S.

    1997-01-01

    Purpose/Objectives: Automated precision radiotherapy using multiple conformal and modulated beams requires monitoring of patient movements during irradiation. Immobilizers such as cradles, which rely on patient cooperation, have somewhat reduced positional uncertainties, but other movements, including breathing, remain largely unquantified. We built an artificial vision (AV) device for real-time viewing, tracking and quantification of patient movements. Method and Materials: The artificial vision system's 'acuity' and 'reflex' were evaluated in terms of imaged skin spatial resolution and temporal dispersion, measured using a mannequin and a fiduciated harmonic oscillator placed at the 100 cm isocenter. The device traced skin motion even in poorly lighted rooms, without explicit skin fiduciation or using standard radiotherapy skin tattoos. Results: The AV system tracked human skin at vision rates approaching 30 Hz with a sensitivity of 2 mm. It successfully identified and tracked independent skin marks, either natural tattoos or artificial fiducials. Three alert levels triggered when patient movement exceeded preset displacements (2 mm at 30 Hz), motion velocities (5 m/sec) or accelerations (2 m/sec²). Conclusion: The AV system trigger should be suitable for patient ventilatory gating and safety interlocking of treatment accelerators, in order to modulate, interrupt, or abort radiation during dynamic therapy
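
The three alert conditions can be reconstructed from the thresholds quoted in the abstract; the sketch below (a hypothetical helper, using 1-D marker positions for simplicity) estimates velocity and acceleration by finite differences of the 30 Hz track:

```python
def motion_alerts(positions_mm, fs=30.0, d_max=2.0, v_max=5.0, a_max=2.0):
    """Flag the abstract's three alert conditions on a track of marker
    positions in mm sampled at `fs` Hz: displacement > 2 mm from the
    reference position, velocity > 5 m/s, acceleration > 2 m/s^2."""
    alerts = set()
    ref = positions_mm[0]
    v_prev = None
    for i in range(1, len(positions_mm)):
        if abs(positions_mm[i] - ref) > d_max:
            alerts.add("displacement")
        v = (positions_mm[i] - positions_mm[i - 1]) / 1000.0 * fs  # m/s
        if abs(v) > v_max:
            alerts.add("velocity")
        if v_prev is not None and abs(v - v_prev) * fs > a_max:
            alerts.add("acceleration")
        v_prev = v
    return alerts
```

In a gating application, each alert level would map to a different accelerator action (modulate, interrupt, or abort).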

  7. Computer Simulation of the Solidification Process Including Air Gap Formation

    Directory of Open Access Journals (Sweden)

    Skrzypczak T.

    2017-12-01

    Full Text Available The paper presents an approach to numerical modelling of alloy solidification in a permanent mold and of transient heat transport between the casting and the mold in two-dimensional space. A gap of time-dependent width, called the "air gap" and filled with a heat-conducting gaseous medium, is included in the model. The coefficient of thermal conductivity of the gas filling the space between the casting and the mold is small enough to introduce significant thermal resistance into the heat transport process. The mathematical model of heat transport is based on the partial differential equation of heat conduction, written independently for the solidifying region and the mold. An appropriate solidification model based on the latent heat of solidification is also included in the mathematical description. These equations are supplemented by appropriate initial and boundary conditions. The formation of the air gap depends on the thermal deformations of the mold and the casting. The numerical model is based on the finite element method (FEM) with independent spatial discretization of the interacting regions. This results in a multi-mesh problem because the considered regions are disconnected.
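
The claim that the gas-filled gap dominates heat transport can be illustrated with a 1-D steady-state series-resistance estimate (illustrative values only; the paper solves the full transient 2-D FEM problem):

```python
def heat_flux(t_cast, t_mold, l_c, k_c, l_m, k_m, d_gap, k_gas):
    """1-D steady-state heat flux (W/m^2) through a casting wall, a
    gas-filled air gap and a mold wall, treated as thermal resistances
    in series: R = l_c/k_c + d_gap/k_gas + l_m/k_m."""
    resistance = l_c / k_c + d_gap / k_gas + l_m / k_m
    return (t_cast - t_mold) / resistance
```

Even a half-millimetre gap of gas with k around 0.03 W/(m K) adds far more resistance than centimetres of metal, which is why the model must track the gap width over time.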

  8. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    International Nuclear Information System (INIS)

    Yu, H; Jenkins, C; Yu, S; Yang, Y; Xing, L

    2016-01-01

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1 mm (1 pixel) precision. The program to correct for image alignment and determine leaf positions requires a runtime of 21–25 seconds for a single picket, and 44–46 seconds for a group of three pickets, on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
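
The abstract locates each leaf edge by logistic fitting; the simplified sketch below instead interpolates the 50%-of-maximum crossing of a 1-D edge profile, which gives comparable sub-pixel behaviour on clean profiles (a stand-in, not the authors' code):

```python
import numpy as np

def leaf_edge_position(profile):
    """Sub-pixel position of a rising field edge in a 1-D intensity
    profile, via linear interpolation of the half-maximum crossing."""
    p = np.asarray(profile, dtype=float)
    half = (p.min() + p.max()) / 2.0
    idx = int(np.nonzero(p >= half)[0][0])  # first sample at/above half
    if idx == 0:
        return 0.0
    x0, x1 = idx - 1, idx
    return x0 + (half - p[x0]) / (p[x1] - p[x0])
```

For the picket fence test, each leaf's profile would be extracted perpendicular to the picket after the distortion and alignment corrections described above, and the measured edge compared against the planned position per TG-142 tolerances.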

  9. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Yu, H; Jenkins, C; Yu, S; Yang, Y; Xing, L [Stanford University, Stanford, CA (United States)

    2016-06-15

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1 mm (1 pixel) precision. The program to correct for image alignment and determine leaf positions requires a runtime of 21–25 seconds for a single picket, and 44–46 seconds for a group of three pickets, on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.

  10. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for remote monitoring and control of an unmanned system in nuclear power station environments. Here, we define a command as the locus of a gesture. We aim at developing algorithms using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross-correlation of the PDOE image. To recognize a gesture word, the 8-direction code is employed as the input symbol for a discrete HMM. The other gesture recognition system, based on glove sensors, uses a Pinch glove and a Polhemus sensor as input devices. Features extracted through preprocessing act as the input signal of the recognizer. To recognize the 3D loci of the Polhemus sensor, a discrete HMM is also adopted. An alternative approach combines the two foregoing recognition systems, using the vision and glove sensors together. The extracted mesh feature and the 8-direction code from locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is introduced here and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.
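
The 8-direction code used as the discrete HMM's input symbol is a standard chain-code quantization of the hand's motion vector between frames; a minimal sketch (the authors' exact code assignment convention is not given in the abstract):

```python
import math

def direction_code(dx, dy, n=8):
    """Quantize a motion vector (dx, dy) into one of `n` direction
    codes, 0 pointing along +x and codes increasing counter-clockwise,
    with sectors centred on the principal directions."""
    ang = math.atan2(dy, dx) % (2 * math.pi)
    return int((ang + math.pi / n) // (2 * math.pi / n)) % n
```

A gesture's locus then becomes a short string of symbols 0-7 that can be fed directly to a discrete HMM.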

  11. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  12. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    Science.gov (United States)

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

    The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz) generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies high enough for a binary decision (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 sec). Based on the suggested paradigm, we implemented the first online ASSR-based BCI system, demonstrating the possibility of a totally vision-free BCI system.
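
The binary decision described (attention to the 37 Hz versus the 43 Hz stream) can be sketched as a comparison of spectral power at the two beat frequencies; only the frequencies come from the abstract, the rest is a generic stand-in for the authors' pipeline:

```python
import numpy as np

def assr_decision(eeg, fs, f_left=37.0, f_right=43.0):
    """Classify the attended side from one EEG trial by comparing
    windowed FFT power at the two ASSR beat frequencies."""
    n = len(eeg)
    spec = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    p_left = spec[np.argmin(np.abs(freqs - f_left))]
    p_right = spec[np.argmin(np.abs(freqs - f_right))]
    return "left" if p_left > p_right else "right"
```

With the 6 s inter-trial interval reported, a 6 s window gives 1/6 Hz frequency resolution, comfortably separating the 37 and 43 Hz bins.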

  13. Behavioral response of tilapia (Oreochromis niloticus) to acute ammonia stress monitored by computer vision.

    Science.gov (United States)

    Xu, Jian-yu; Miao, Xiang-wen; Liu, Ying; Cui, Shao-rong

    2005-08-01

    The behavioral responses of a tilapia (Oreochromis niloticus) school to low (0.13 mg/L), moderate (0.79 mg/L) and high (2.65 mg/L) levels of unionized ammonia (UIA) concentration were monitored using a computer vision system. The swimming activity and geometrical parameters such as the location of the gravity center and the distribution of the fish school were calculated continuously. These behavioral parameters of the tilapia school responded sensitively to moderate and high UIA concentrations. Under high UIA concentration the fish activity showed a significant increase. Monitoring fish behavior under acute stress can therefore provide important information useful in predicting the stress.
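Geometrical parameters such as the gravity centre and the dispersion of the school follow directly once per-frame fish coordinates are available; a minimal numpy sketch with invented coordinates (not the paper's data):

```python
import numpy as np

# Hypothetical per-frame fish coordinates (x, y) detected by the vision system.
positions = np.array([[10.0, 12.0], [14.0, 15.0], [12.0, 9.0], [16.0, 12.0]])

center = positions.mean(axis=0)                                 # gravity centre of the school
dispersion = np.linalg.norm(positions - center, axis=1).mean()  # mean distance to centre

print(center, dispersion)
```

Tracking these two quantities over time is what lets a sudden activity increase under stress show up as a measurable signal.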

  14. An Application of Computer Vision Systems to Solve the Problem of Unmanned Aerial Vehicle Control

    Directory of Open Access Journals (Sweden)

    Aksenov Alexey Y.

    2014-09-01

    The paper considers an approach for the application of computer vision systems to the problem of unmanned aerial vehicle control. The processing of images obtained through an onboard camera is required for absolute positioning of the aerial platform (automatic landing and take-off, hovering, etc.). The proposed method combines the advantages of existing systems and provides the ability to hover over a given point and to perform precise take-off and landing. The limitations of the implemented methods are determined, and an algorithm is proposed to combine them in order to improve efficiency.

  15. Computer vision-based apple grading for golden delicious apples based on surface features

    Directory of Open Access Journals (Sweden)

    Payman Moallem

    2017-03-01

    In this paper, a computer vision-based algorithm for golden delicious apple grading is proposed which works in six steps. Non-apple pixels are first removed from the input images as background. Then, the stem end is detected by a combination of morphological methods and a Mahalanobis distance classifier. The calyx region is also detected by applying K-means clustering on the Cb component in the YCbCr color space. After that, defect segmentation is achieved using a Multi-Layer Perceptron (MLP) neural network. In the next step, the stem end and calyx regions are removed from the defective regions to refine and improve the grading process. Then, statistical, textural and geometric features are extracted from the refined defective regions. Finally, for apple grading, the performance of Support Vector Machine (SVM), MLP and K-Nearest Neighbor (KNN) classifiers is compared. Classification is done in two manners: in the first, an input apple is classified into two categories, healthy and defective; in the second, the input apple is classified into three categories, first rank, second rank and rejected. In both grading tasks, the SVM classifier performs best, with recognition rates of 92.5% and 89.2% for the two categories (healthy and defective) and the three quality categories (first rank, second rank and rejected), respectively, among 120 different golden delicious apple images, using K-folding with K = 5. Moreover, the accuracy of the proposed segmentation algorithms, including stem end detection and calyx detection, is evaluated on two different apple image databases.
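The calyx-detection step relies on K-means clustering of Cb values; in one dimension the algorithm reduces to the following sketch (plain numpy, with illustrative values rather than real Cb samples):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain 1-D k-means, e.g. for splitting Cb values into calyx vs. non-calyx."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each value to the nearest centre, then recompute the centres.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Two well-separated groups of hypothetical Cb values:
labels, centers = kmeans_1d(np.array([110., 112., 114., 150., 152., 154.]))
print(sorted(centers))  # cluster means near 112 and 152
```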

  16. Computer Vision Tools for Low-Cost and Noninvasive Measurement of Autism-Related Behaviors in Infants

    Directory of Open Access Journals (Sweden)

    Jordan Hashemi

    2014-01-01

    The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated which promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral signs can be observed late in the first year of life. Many of these studies involve extensive frame-by-frame video observation and analysis of a child's natural behavior. Although nonintrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are burdensome for clinical and large population research purposes. This work is a first milestone in a long-term project on non-invasive early observation of children in order to aid in risk detection and research of neurodevelopmental disorders. We focus on providing low-cost computer vision tools to measure and identify ASD behavioral signs based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure responses to general ASD risk assessment tasks and activities outlined by the AOSI which assess visual attention by tracking facial features. We show results, including comparisons with expert and nonexpert clinicians, which demonstrate that the proposed computer vision tools can capture critical behavioral observations and potentially augment the clinician's behavioral observations obtained from real in-clinic assessments.

  17. The computer vision in the service of safety and reliability in steam generators inspection services; La vision computacional al servicio de la seguridad y fiabilidad en los servicios de inspeccion en generadores de vapor

    Energy Technology Data Exchange (ETDEWEB)

    Pineiro Fernandez, P.; Garcia Bueno, A.; Cabrera Jordan, E.

    2012-07-01

    Computer vision has matured very quickly in the last ten years, facilitating new developments in various areas of nuclear application and allowing processes and tasks to be automated and simplified, working in place of, or in collaboration with, people and equipment efficiently. Current computer vision (a more appropriate term than artificial vision) also offers great possibilities for improving the reliability and safety of NPP inspection systems.

  18. Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions

    International Nuclear Information System (INIS)

    Fedkiw, Ronald P.; Sapiro, Guillermo; Shu Chiwang

    2003-01-01

    In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods, and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes, and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, fluids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision.

  19. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under high vacuum conditions during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the laser diagnostic system calibration.
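The marker-based displacement estimate reduces to a pixel-to-millimetre scale factor obtained from a marker of known physical size; an illustrative sketch (all numbers invented, not EAST data):

```python
# Scale calibration from a visual marker of known physical size, then
# conversion of a measured laser-spot shift from pixels to millimetres.
marker_size_mm = 40.0    # known edge length of the marker on the inner wall
marker_size_px = 160.0   # its measured length in the camera image
mm_per_px = marker_size_mm / marker_size_px   # 0.25 mm/pixel

spot_shift_px = 9.0      # measured displacement of the laser spot, in pixels
spot_shift_mm = spot_shift_px * mm_per_px
print(spot_shift_mm)     # 2.25
```

The achievable accuracy (3 mm in the paper) is bounded by this scale factor times the sub-image localisation error, which is why camera resolution is the limiting quantity.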

  20. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    International Nuclear Information System (INIS)

    Yang, Yang; Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei; Bruno, Vincent; Eric, Villedieu

    2016-01-01

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under high vacuum conditions during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the laser diagnostic system calibration.

  1. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. Images of 100 bread and 100 durum wheat grains are taken with a high-resolution camera and subjected to pre-processing. The main visual features, comprising four dimensions, three colors and five textures, are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are used as input parameters of the ANN model. The ANN is modelled with four different input data subsets to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains, and its accuracy is tested with the remaining 20 of the 200 wheat grains. The seven input parameters most effective for classification are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
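To illustrate the kind of MLP classifier used here, the following self-contained numpy sketch trains a one-hidden-layer network on synthetic two-class "feature" data; it is a generic stand-in, not the paper's 21-feature model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-in for normalised grain features: class 0 low, class 1 high.
X = np.vstack([rng.normal(0.2, 0.05, (20, 3)), rng.normal(0.8, 0.05, (20, 3))])
y = np.array([0.0] * 20 + [1.0] * 20).reshape(-1, 1)

# One hidden layer (4 units), trained by gradient descent on cross-entropy loss.
W1 = rng.normal(0, 0.5, (3, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) / len(X)        # gradient of cross-entropy w.r.t. the output logit
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

accuracy = ((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5) == (y > 0.5)).mean()
print(accuracy)
```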

  2. Selective cultivation and rapid detection of Staphylococcus aureus by computer vision.

    Science.gov (United States)

    Wang, Yong; Yin, Yongguang; Zhang, Chaonan

    2014-03-01

    In this paper, we developed a selective growth medium and a more rapid detection method based on computer vision for the selective isolation and identification of Staphylococcus aureus from foods. The selective medium consisted of tryptic soy broth basal medium, 3 inhibitors (NaCl, K2TeO3, and phenethyl alcohol), and 2 accelerators (sodium pyruvate and glycine). After 4 h of selective cultivation, bacterial detection was accomplished using computer vision. The total analysis time was 5 h. Compared to the Baird-Parker plate count method, which requires 4 to 5 d, this new detection method offers great time savings. Moreover, our novel method had a correlation coefficient of greater than 0.998 when compared with the Baird-Parker plate count method. The detection range for S. aureus was 10 to 10^7 CFU/mL. Our new, rapid detection method for microorganisms in foods has great potential for routine food safety control and microbiological detection applications. © 2014 Institute of Food Technologists®

  3. Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress

    Directory of Open Access Journals (Sweden)

    Chunlei Xia

    2018-01-01

    Video-tracking-based biological early warning systems have achieved great progress with advanced computer vision and machine learning methods, and the ability to video-track multiple biological organisms has been largely improved in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment, and the investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present pioneering work in the precise tracking of groups of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxicity analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning are explained along with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, the advantages of recently developed deep learning approaches for toxicity prediction are presented.

  4. Computer vision system for egg volume prediction using backpropagation neural network

    Science.gov (United States)

    Siswantoro, J.; Hilman, M. Y.; Widiasri, M.

    2017-11-01

    Volume is one of the aspects considered in the egg sorting process, and a rapid and accurate volume measurement method is needed to develop an egg sorting system. A computer vision system (CVS) provides a promising solution to the volume measurement problem. Artificial neural networks (ANNs) have been used to predict egg volume in several CVSs. However, volume prediction from an ANN can be less accurate due to inappropriate input features or an inappropriate ANN structure. This paper proposes a CVS for predicting the volume of an egg using an ANN. The CVS acquires an image of the egg from the top view and then processes the image to extract its 1D and 2D size features. The features are used as input for the ANN in predicting the volume of the egg. The experimental results show that the proposed CVS can predict the volume of an egg with good accuracy and little computation time.
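A common geometric baseline for egg volume from a single silhouette assumes rotational symmetry and sums disc contributions along the outline; the sketch below is our own illustrative implementation (not the paper's ANN), checked against the analytic volume of an ellipsoid:

```python
import numpy as np

def volume_from_profile(widths_px, px_size):
    """Approximate the volume of a rotationally symmetric object from its
    silhouette: each image row contributes a disc of diameter widths_px[i]."""
    radii = 0.5 * np.asarray(widths_px) * px_size
    return float(np.pi * np.sum(radii**2) * px_size)

# Sanity check on a synthetic ellipse silhouette (semi-axes a=30, b=20 units):
a, b, px = 30.0, 20.0, 0.1
ys = np.arange(-a, a, px)
widths = 2 * b * np.sqrt(np.clip(1 - (ys / a)**2, 0, 1)) / px  # row widths in pixels
vol = volume_from_profile(widths, px)
print(vol, 4/3 * np.pi * a * b**2)  # numeric vs analytic ellipsoid volume
```

Features such as this geometric estimate, together with the raw 1D/2D measurements, are the kind of inputs an ANN can refine for real (non-ellipsoidal) eggs.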

  5. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    Science.gov (United States)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for the construction of positioning and control systems in industrial plants, based on aggregation to determine the current storage area using computer vision and radio-frequency identification. It describes the design of the hardware for an industrial-product positioning system on the territory of a plant on the basis of a radio-frequency grid, the design of the hardware for an industrial-product positioning system in the plant on the basis of computer vision methods, and the development of the method of aggregation to determine the current storage area using computer vision and radio-frequency identification. Experimental studies in laboratory and production conditions have been conducted and are described in the article.

  6. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    Science.gov (United States)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Currently available robotic systems provide limited support for CAD-based, model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between operator and machine vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotic manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  7. Application of a computable model of human spatial vision to phase discrimination

    Science.gov (United States)

    Nielsen, K. R. K.; Watson, A. B.; Ahumada, A. J., Jr.

    1985-01-01

    A computable model of human spatial vision is used to make predictions for phase-discrimination experiments. This model is being developed to deal with a broad range of problems in vision and was not specifically formulated to deal with phase discrimination. In the model, cross-correlation of the stimuli with an array of sensors produces feature vectors that are operated on by a position-uncertain ideal observer to simulate detection and discrimination experiments. In this report, the stimuli are compound sinusoidal gratings composed of a fundamental and a higher-frequency component added in various phases. Model predictions are compared with three key results from the literature: (1) the effect of the contrast of the fundamental on phase discrimination, (2) threshold phase difference as a function of the fundamental frequency, and (3) the contrast required for phase discrimination as a function of the frequency ratio of the two grating components. In the first two cases, the predictions capture the main features of the data, although quantitative discrepancies remain. In the third case, the model fails, and this failure suggests additional restrictions on the combination of information across sensors.

  8. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    This work aims at detecting surface defects on reflective industrial parts. A machine vision system performing the detection of geometric-aspect surface defects is completely described. The revealing of defects is realized by a particular lighting device, carefully designed to ensure the imaging of defects. The lighting system greatly simplifies the image processing for defect segmentation, so real-time inspection of reflective products is possible. To assist in the design of the imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images and provides a very efficient way to perform tests compared to numerous manual experiments.

  9. Displacement measurement of the compliant positioning stage based on a computer micro-vision method

    Directory of Open Access Journals (Sweden)

    Heng Wu

    2016-02-01

    We propose a practical computer micro-vision-based method for displacement measurement of a compliant positioning stage. The algorithm of the proposed method is based on a template matching approach composed of an integer-pixel search and a sub-pixel search. By combining an optical microscope, a high-resolution CCD camera and the proposed algorithm, extremely high measurement precision is achieved. Various simulations and experiments were conducted. The simulation results demonstrate that the matching precision can reach 0.01 pixel when the noise interference is low. A laser interferometer measurement system (LIMS) was established for comparison. The experimental results indicate that the proposed method matches the performance of the LIMS while exhibiting greater flexibility and operability. The measurement precision can theoretically attain 2.83 nm/pixel.
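The integer-pixel plus sub-pixel template search can be illustrated in one dimension: normalized cross-correlation picks the best integer offset, and a three-point parabola fit refines it. This is a sketch of the general technique, not the paper's exact algorithm:

```python
import numpy as np

def match_1d(signal, template):
    """Integer-pixel search by normalized cross-correlation, then parabolic
    sub-pixel refinement around the correlation peak."""
    n, m = len(signal), len(template)
    scores = np.array([np.dot(signal[i:i+m], template) /
                       (np.linalg.norm(signal[i:i+m]) * np.linalg.norm(template) + 1e-12)
                       for i in range(n - m + 1)])
    k = float(np.argmax(scores))
    j = int(k)
    if 0 < j < len(scores) - 1:        # 3-point parabola through the peak
        y0, y1, y2 = scores[j-1], scores[j], scores[j+1]
        k = j + 0.5 * (y0 - y2) / (y0 - 2*y1 + y2)
    return k

x = np.arange(200, dtype=float)
template = np.exp(-0.5 * ((np.arange(40) - 20) / 4.0)**2)   # Gaussian bump at index 20
signal = np.exp(-0.5 * ((x - 83.3) / 4.0)**2)               # same bump at 83.3
print(match_1d(signal, template))                           # ~63.3 (= 83.3 - 20)
```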

  10. Gait Analysis Using Computer Vision Based on Cloud Platform and Mobile Device

    Directory of Open Access Journals (Sweden)

    Mario Nieto-Hidalgo

    2018-01-01

    Frailty and senility are syndromes that affect elderly people. The ageing process involves a decay of cognitive and motor functions which often impacts the quality of life of elderly people. Some studies have linked this deterioration of cognitive and motor function to gait patterns. Thus, gait analysis can be a powerful tool to assess frailty and senility syndromes. In this paper, we propose a vision-based gait analysis approach performed on a smartphone with cloud computing assistance. Gait sequences recorded by a smartphone camera are processed by the smartphone itself to obtain spatiotemporal features. These features are uploaded to the cloud, where they are analysed and compared against a stored database to produce a diagnosis. The feature extraction method presented can work with both frontal and sagittal gait sequences, although the sagittal view provides better classification, with an accuracy of 95%.
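Typical spatiotemporal gait features such as stride time and cadence follow directly from detected heel-strike frames; a minimal sketch with invented numbers (not the paper's data):

```python
import numpy as np

# Hypothetical heel-strike frame indices of one foot, from a 30 fps camera.
fps = 30
heel_strikes = np.array([12, 45, 79, 112, 146])

stride_times = np.diff(heel_strikes) / fps   # seconds per stride (same-foot interval)
cadence = 2 * 60 / stride_times.mean()       # steps/min (two steps per stride)
print(stride_times.mean(), cadence)
```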

  11. Analysis of the Indented Cylinder by the use of Computer Vision

    DEFF Research Database (Denmark)

    Buus, Ole Thomsen

    ... certain species of seed from each other. Seeds are processed in order to achieve a high-quality end product: a batch of a single species of crop seed. Naturally, farmers need processed clean crop seeds that are free from non-seed impurities, weed seeds, and non-viable or dead crop seeds. Since ... the processing is based on physical manipulation of the seeds themselves, their individual shape and size become very relevant. The problem of modelling such physical parameters for various species of seed, grown under various environmental circumstances, is a very complex one. The general problem of modelling ... all the complexities related to that as well. The project arrived at a number of results of high scientific and practical value to the area of applied computer vision, seed processing, and agricultural technology in general. The results and methodologies were summarised in one conference paper ...

  12. Sim4CV: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Müller, Matthias

    2018-03-24

    We present a photo-realistic training and evaluation simulator (Sim4CV) (http://www.sim4cv.org) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  13. UE4Sim: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Mueller, Matthias

    2017-08-19

    We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  14. A method of detection to the grinding wheel layer thickness based on computer vision

    Science.gov (United States)

    Ji, Yuchen; Fu, Luhua; Yang, Dujuan; Wang, Lei; Liu, Changjie; Wang, Zhong

    2018-01-01

    This paper proposes a method for detecting the grinding wheel layer thickness based on computer vision. A camera is used to capture images of the grinding wheel layer around the whole circle. Forward lighting and back lighting are used to enable clear images to be acquired. Image processing is then executed on the captured images, consisting of image pre-processing, binarization and subpixel subdivision. The aim of binarization is to help locate a chord and the corresponding ring width. After subpixel subdivision, the thickness of the grinding layer is finally calculated. Compared with methods usually used to detect grinding wheel wear, the method in this paper directly and quickly obtains the thickness information. The eccentricity error and the pixel-equivalent error are also discussed in this paper.
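The chord-and-ring-width idea can be checked on a synthetic binary annulus: a single image row crossing the ring yields a run of foreground pixels whose length is the layer thickness. This is a sketch of the principle, not the paper's code:

```python
import numpy as np

# Synthetic binary image of a wheel layer: an annulus of radial thickness 15 px.
h = w = 200
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(xx - 100, yy - 100)
ring = (r >= 60) & (r < 75)

row = ring[100]                                   # horizontal chord through the centre
edges = np.flatnonzero(np.diff(row.astype(int)))  # foreground run boundaries
left_width = edges[1] - edges[0]                  # width of the left ring crossing, px
print(left_width)                                 # 15
```

Multiplying such a pixel width by the calibrated pixel equivalent gives the physical layer thickness; subpixel edge localisation refines the two boundary positions.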

  15. Lipid vesicle shape analysis from populations using light video microscopy and computer vision.

    Directory of Open Access Journals (Sweden)

    Jernej Zupanc

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable the analysis of vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousand lipid vesicles (1-50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and the size distributions of their projected diameters and isoperimetric quotients (a measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in size and shape and are distributed non-homogeneously throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected.
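The isoperimetric quotient mentioned above is Q = 4πA/P²; from a traced contour it can be computed with the shoelace area and the polygon perimeter (a sketch of the standard formula, not the paper's implementation):

```python
import numpy as np

def isoperimetric_quotient(x, y):
    """Q = 4*pi*A / P**2 for a closed polygon; Q = 1 for a circle,
    smaller for less round contours."""
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace
    per = np.sum(np.hypot(np.diff(np.append(x, x[0])), np.diff(np.append(y, y[0]))))
    return 4 * np.pi * area / per**2

t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
print(isoperimetric_quotient(np.cos(t), np.sin(t)))            # ~1.0 (circle)
print(isoperimetric_quotient(2 * np.cos(t), 0.5 * np.sin(t)))  # ~0.54 (elongated ellipse)
```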

  16. Nonstationary color tracking for vision-based human-computer interaction.

    Science.gov (United States)

    Wu, Ying; Huang, T S

    2002-01-01

    Skin color offers a strong cue for efficient localization and tracking of human body parts in video sequences for vision-based human-computer interaction. Color-based target localization could be achieved by analyzing segmented skin color regions. However, one of the challenges of color-based target tracking is that color distributions would change in different lighting conditions such that fixed color models would be inadequate to capture nonstationary color distributions over time. Meanwhile, using a fixed skin color model trained by the data of a specific person would probably not work well for other people. Although some work has been done on adaptive color models, this problem still needs further studies. We present our investigation of color-based image segmentation and nonstationary color-based target tracking, by studying two different representations for color distributions. We propose the structure adaptive self-organizing map (SASOM) neural network that serves as a new color model. Our experiments show that such a representation is powerful for efficient image segmentation. Then, we formulate the nonstationary color tracking problem as a model transduction problem, the solution of which offers a way to adapt and transduce color classifiers in nonstationary color distributions. To fulfill model transduction, we propose two algorithms, the SASOM transduction and the discriminant expectation-maximization (EM), based on the SASOM color model and the Gaussian mixture color model, respectively. Our extensive experiments on the task of real-time face/hand localization show that these two algorithms can successfully handle some difficulties in nonstationary color tracking. We also implemented a real-time face/hand localization system based on such algorithms for vision-based human-computer interaction.

  17. Application of Computer Vision for quality control in frozen mixed berries production: colour calibration issues

    Directory of Open Access Journals (Sweden)

    D. Ricauda Aimonino

    2013-09-01

    Computer vision is becoming increasingly important in the quality control of many food processes. The appearance properties of food products (colour, texture, shape and size) are, in fact, correlated with organoleptic characteristics and/or the presence of defects. Quality control based on image processing eliminates the subjectivity of human visual inspection, allowing rapid and non-destructive analysis. However, most food matrices show a wide variability in appearance features; therefore, robust and customized image elaboration algorithms have to be implemented for each specific product. For this reason, quality control by visual inspection is still rather widespread in several food processes. The case study inspiring this paper concerns the production of frozen mixed berries. Once frozen, different kinds of berries are mixed together, in different amounts, according to a recipe. The correct quantity of each kind of fruit, within a certain tolerance, has to be ensured by producers. Quality control relies on taking a few samples of the same weight from each production lot and manually counting the amount of each species. This operation is tedious, subject to errors, and time-consuming, whereas a computer vision system (CVS) could determine the amount of each kind of berry in a few seconds. This paper discusses the problem of colour calibration of the CVS used for frozen berry mixture evaluation. Images are acquired by a digital camera coupled with a dome lighting system, which gives homogeneous illumination over the entire visible surface of the berries, and by a flat-bed scanner. RGB device-dependent data are then mapped onto the CIELab colorimetric colour space using different transformation operators. The obtained results show that the proposed calibration procedure leads to colour discrepancies comparable to or even below the sensitivity of the human eye.
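A device-RGB to CIELab transformation operator can be fitted by least squares from reference patches; the sketch below recovers a synthetic affine transform. The matrix values are invented for the test; a real calibration would use measured reference-chart data and possibly polynomial terms:

```python
import numpy as np

# Least-squares mapping from device RGB to CIELab using reference colour
# patches with "known" Lab values (synthetic here).
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, (24, 3))                 # measured patch RGB
M_true = np.array([[90.0, 5.0, 2.0],
                   [-40.0, 60.0, -10.0],
                   [10.0, -30.0, 55.0]])
lab = rgb @ M_true                               # reference Lab values

# Fit Lab ≈ [rgb, 1] @ M  (affine model; polynomial terms could be appended).
A = np.hstack([rgb, np.ones((24, 1))])
M, *_ = np.linalg.lstsq(A, lab, rcond=None)

residual = np.abs(A @ M - lab).max()
print(residual)   # ~0: the affine fit recovers the synthetic mapping exactly
```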

  18. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; Ohara, Kenichi

    2007-01-01

In the field of micro vision, there has been little research compared with the macro environment. However, by applying results from macro-scale computer vision techniques, it is possible to measure and observe the micro environment. Moreover, based on the effects particular to the micro environment, new theories and techniques may be discovered.

  19. Principle for the Validation of a Driving Support using a Computer Vision-Based Driver Modelization on a Simulator

    Directory of Open Access Journals (Sweden)

    Baptiste Rouzier

    2015-07-01

Full Text Available This paper presents a new structure for a driving support system designed to compensate for problems caused by the behaviour of the driver without causing a feeling of unease. The assistance is based on shared control between the human and an automatic support that computes and applies an assisting torque on the steering wheel. This torque is computed from a representation of the hazards encountered on the road by virtual potentials. However, the equilibrium between the relative influences of the human and the support on the steering wheel is difficult to find and depends upon the situation. This is why the driving support includes a modelization of the driver based on an analysis of several face features using a computer vision algorithm. The goal is to determine whether the driver is drowsy, or whether he is paying attention to specific points, in order to adapt the strength of the support. The accuracy of the measurements made on the face features is estimated, and the value of the proposal, as well as the concepts raised by such assistance, are studied through simulations.
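The hazard-as-potential idea can be sketched in a few lines: hazards become repulsive potentials over lateral position, the assisting torque follows the negative potential gradient, and the gain is scaled by a drowsiness estimate from the face-analysis module. All function names, gains and the Gaussian potential shape below are illustrative assumptions, not the paper's model.

```python
import math

# Minimal sketch of the shared-control idea: hazards on the road are
# modelled as Gaussian repulsive potentials over lateral position, and the
# assisting torque follows the negative potential gradient. The gain is
# scaled by a drowsiness estimate (0 = fully alert, 1 = drowsy) coming
# from the face-analysis module. All names and gains are illustrative.

def potential_gradient(y, hazards):
    """dU/dy at lateral offset y for hazards [(y_h, amplitude, sigma), ...]."""
    g = 0.0
    for y_h, amp, sigma in hazards:
        g += amp * (-(y - y_h) / sigma**2) * math.exp(-(y - y_h)**2 / (2 * sigma**2))
    return g

def assist_torque(y, hazards, drowsiness, base_gain=1.0, max_boost=2.0):
    """Steering torque pushing away from hazards; stronger when drowsy."""
    gain = base_gain * (1.0 + max_boost * drowsiness)
    return -gain * potential_gradient(y, hazards)
```

The sign convention makes the torque push the vehicle away from a hazard, and the drowsiness factor strengthens the support exactly when the driver model predicts reduced attention.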

  20. SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality

    International Nuclear Information System (INIS)

    MacDougall, R.D.; Scherrer, B; Don, S

    2016-01-01

Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video were streamed to a Windows 10 PC. Proprietary software created an augmented reality interface in which overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of the ordered body part, position of the image receptor, thickness of anatomy, location of AEC cells, the collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data were collected in a volunteer study to validate patient thickness measurements; x-ray images were not acquired. Results: The proprietary software correctly identified the ordered body part, measured patient motion, and calculated thickness of anatomy. Pre-clinical data demonstrated the accuracy and precision of the body part thickness measurement when compared with other methods (e.g. a laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund
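The two measurements the prototype combines can be sketched as follows: anatomy thickness derived from depth-sensor distances, and a nearest-entry lookup in a thickness-based technique chart. The chart values below are invented placeholders, not clinical data, and the distance convention is an assumption.

```python
# Sketch of depth-based thickness measurement plus a technique-chart
# lookup. The chart values below are invented placeholders, not clinical
# data; a real chart would be commissioned per x-ray room.

def thickness_cm(dist_to_table_m, dist_to_surface_m):
    """Patient thickness = sensor-to-table distance minus sensor-to-skin distance."""
    return (dist_to_table_m - dist_to_surface_m) * 100.0

# thickness (cm) -> (kVp, mAs); purely illustrative numbers
TECHNIQUE_CHART = {10: (60, 2.0), 15: (66, 2.5), 20: (73, 3.2), 25: (81, 4.0)}

def suggest_technique(thickness):
    """Return the chart entry whose thickness key is closest to the measurement."""
    key = min(TECHNIQUE_CHART, key=lambda t: abs(t - thickness))
    return TECHNIQUE_CHART[key]
```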

  1. Ensemble of different local descriptors, codebook generation methods and subwindow configurations for building a reliable computer vision system

    Directory of Open Access Journals (Sweden)

    Loris Nanni

    2014-04-01

    The MATLAB code of our system will be publicly available at http://www.dei.unipd.it/wdyn/?IDsezione=3314&IDgruppo_pass=124&preview=. Our free MATLAB toolbox can be used to verify the results of our system. We also hope that our toolbox will serve as the foundation for further explorations by other researchers in the computer vision field.

  2. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    Science.gov (United States)

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies. Likewise, the porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products on a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
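The two colour statistics mentioned above, mean a* and a browning ratio, reduce to simple per-pixel operations once the image is in CIELab. A minimal sketch, assuming the image is already converted to Lab and using an illustrative a* threshold (the paper's actual browning criterion may differ):

```python
import numpy as np

# Sketch of the two colour statistics discussed above, computed on an
# image already converted to CIELab (shape H x W x 3, channels L*, a*, b*).
# The a* threshold separating "browned" pixels is an illustrative assumption.

def mean_a_star(lab_img):
    """Mean CIE a* over the whole image."""
    return float(lab_img[..., 1].mean())

def browning_ratio(lab_img, a_threshold=15.0):
    """Fraction of pixels whose a* exceeds the browning threshold."""
    return float((lab_img[..., 1] > a_threshold).mean())
```

Either statistic could then be regressed against measured acrylamide content to build the kind of correlation the abstract describes.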

  3. Comparability of the performance of in-line computer vision for geometrical verification of parts, produced by Additive Manufacturing

    DEFF Research Database (Denmark)

    Pedersen, David B.; Hansen, Hans N.

    2014-01-01

-customized parts with narrow geometrical tolerances require individual verification, whereas many hyper-complex parts simply cannot be measured by traditional means such as optical or mechanical measurement tools. This paper addresses the challenge by detailing how in-line computer vision has been employed...

  4. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030

    Science.gov (United States)

    Slotnick, Jeffrey P.; Khodadoust, Abdollah; Alonso, Juan J.; Darmofal, David L.; Gropp, William D.; Lurie, Elizabeth A.; Mavriplis, Dimitri J.; Venkatakrishnan, Venkat

    2014-01-01

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be ‘cleaner’ and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future. PMID:25024413

  5. Classification of fruits using computer vision and a multiclass support vector machine.

    Science.gov (United States)

    Zhang, Yudong; Wu, Lenan

    2012-01-01

Automatic classification of fruits via computer vision is still a complicated task due to the various properties of numerous types of fruits. We propose a novel classification method based on a multi-class kernel support vector machine (kSVM) with the goal of accurate and fast classification of fruits. First, fruit images were acquired by a digital camera, and the background of each image was removed by a split-and-merge algorithm. Second, the color histogram, texture and shape features of each fruit image were extracted to compose a feature space. Third, principal component analysis (PCA) was used to reduce the dimensionality of the feature space. Fourth, three kinds of multi-class SVMs were constructed, i.e., Winner-Takes-All SVM, Max-Wins-Voting SVM, and Directed Acyclic Graph SVM, and three kinds of kernels were chosen, i.e., the linear kernel, the Homogeneous Polynomial kernel, and the Gaussian Radial Basis kernel. Finally, the SVMs were trained using 5-fold stratified cross-validation with the reduced feature vectors as input. The experimental results demonstrated that the Max-Wins-Voting SVM with the Gaussian Radial Basis kernel achieves the best classification accuracy of 88.2%. In terms of computation time, the Directed Acyclic Graph SVM performed fastest.
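The Max-Wins-Voting scheme trains one binary classifier per pair of classes and lets each cast a vote at prediction time. The following sketch shows that voting logic only; for brevity a least-squares linear separator stands in for the paper's kernel SVMs, so it illustrates the multiclass construction rather than the authors' exact classifier.

```python
import numpy as np
from itertools import combinations

# Sketch of the Max-Wins-Voting multiclass scheme: one binary classifier
# per class pair, each casting a vote at prediction time. A least-squares
# linear separator stands in for a kernel SVM; the voting logic is the same.

def fit_pairwise(X, y):
    models = {}
    for i, j in combinations(sorted(set(y)), 2):
        mask = (y == i) | (y == j)
        Xa = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # bias column
        t = np.where(y[mask] == i, 1.0, -1.0)                # +1 for class i
        w, *_ = np.linalg.lstsq(Xa, t, rcond=None)
        models[(i, j)] = w
    return models

def predict_max_wins(models, x, classes):
    votes = {c: 0 for c in classes}
    xa = np.append(x, 1.0)
    for (i, j), w in models.items():
        votes[i if xa @ w > 0 else j] += 1   # winner of the pair gets a vote
    return max(votes, key=votes.get)
```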

  6. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    Science.gov (United States)

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as well developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to automate phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully, although they may require different ML algorithms for segmentation.
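Pixel-level kNN segmentation of the kind used for plant/background separation can be sketched as below. The training pixels and the choice of k are illustrative; a real system would sample labelled pixels from annotated images and normalise the features.

```python
import numpy as np

# Sketch of pixel-level kNN segmentation (plant vs. background) of the
# kind tested above. Training pixels and k are illustrative assumptions.

def knn_segment(pixels, train_X, train_y, k=3):
    """Classify each RGB pixel (N x 3) by majority vote of its k nearest
    labelled training pixels (1 = plant, 0 = background)."""
    # Pairwise squared distances, shape (N, M)
    d2 = ((pixels[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]
    votes = train_y[nearest]                       # (N, k) neighbour labels
    return (votes.mean(axis=1) > 0.5).astype(int)  # binary majority vote
```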

  7. On-chip imaging of Schistosoma haematobium eggs in urine for diagnosis by computer vision.

    Directory of Open Access Journals (Sweden)

    Ewert Linder

Full Text Available BACKGROUND: Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through the provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique that could be exploited commercially for the development of inexpensive "mini-microscopes". Images can be analysed both visually and by computer vision, at the point of care as well as at remote locations. METHODS/PRINCIPAL FINDINGS: Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. CONCLUSIONS/SIGNIFICANCE: As proof of concept, we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases.

  8. Automatic sex detection of individuals of Ceratitis capitata by means of computer vision in a biofactory.

    Science.gov (United States)

    Blasco, Jose; Gómez-Sanchís, Juan; Gutierrez, Abelardo; Chueca, Patricia; Argilés, Rafael; Moltó, Enrique

    2009-01-01

    The sterile insect technique (SIT) is acknowledged around the world as an effective method for biological pest control of Ceratitis capitata (Wiedemann). Sterile insects are produced in biofactories where one key issue is the selection of the progenitors that have to transmit specific genetic characteristics. Recombinant individuals must be removed as this colony is renewed. Nowadays, this task is performed manually, in a process that is extremely slow, painstaking and labour intensive, in which the sex of individuals must be identified. The paper explores the possibility of using vision sensors and pattern recognition algorithms for automated detection of recombinants. An automatic system is proposed and tested to inspect individual specimens of C. capitata using machine vision. It includes a backlighting system and image processing algorithms for determining the sex of live flies in five high-resolution images of each insect. The system is capable of identifying the sex of the flies by means of a program that analyses the contour of the abdomen, using fast Fourier transform features, to detect the presence of the ovipositor. Moreover, it can find the characteristic spatulate setae of males. Simulation tests with 1000 insects (5000 images) had 100% success in identifying male flies, with an error rate of 0.6% for female flies. This work establishes the basis for building a machine for the automatic detection and removal of recombinant individuals in the selection of progenitors for biofactories, which would have huge benefits for SIT around the globe.
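The FFT-based contour feature behind this kind of shape analysis can be sketched with generic Fourier descriptors: dropping the DC term removes translation, dividing by the fundamental removes scale, and taking magnitudes removes rotation and the choice of starting point. This is a textbook construction, not the authors' exact feature set for detecting the ovipositor.

```python
import numpy as np

# Sketch of Fourier descriptors for a closed contour, the family of FFT
# features used to characterise an outline such as the fly's abdomen.
# A generic construction, not the authors' exact feature set.

def fourier_descriptors(contour_xy, n_coeffs=8):
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as a complex signal
    F = np.fft.fft(z)
    F = F[1:]                  # discard DC term (removes translation)
    F = F / np.abs(F[0])       # normalise by the fundamental (removes scale)
    return np.abs(F[:n_coeffs])  # magnitudes (remove rotation / start point)
```

A classifier can then compare descriptor vectors between contours with and without the ovipositor bulge, since the descriptors are invariant to where and how large the insect appears in the image.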

  9. Television, computer and portable display device use by people with central vision impairment

    Science.gov (United States)

    Woods, Russell L; Satgunam, PremNandhini

    2011-01-01

Purpose: To survey the viewing experience (e.g. hours watched, difficulty) and viewing metrics (e.g. viewing distance, display size) for television (TV), computers and portable visual display devices for normally-sighted (NS) and visually impaired participants. This information may guide visual rehabilitation. Methods: The survey was administered either in person or by telephone interview to 223 participants, of whom 104 had low vision (LV, worse than 6/18, age 22 to 90y, 54 males) and 94 were NS (visual acuity 6/9 or better, age 20 to 86y, 50 males). Depending on their situation, NS participants answered up to 38 questions and LV participants answered up to a further 10 questions. Results: Many LV participants reported at least “some” difficulty watching TV (71/103), reported at least “often” having difficulty with computer displays (40/76) and extreme difficulty watching videos on handheld devices (11/16). The average daily TV viewing was slightly, but not significantly, higher for the LV participants (3.6h) than the NS (3.0h). Only 18% of LV participants used visual aids (all optical) to watch TV. Most LV participants obtained effective magnification from a reduced viewing distance for both TV and computer displays. Younger LV participants also used a larger display than older LV participants to obtain increased magnification. About half of the TV viewing time occurred in the absence of a companion for both the LV and the NS participants. The mean number of TVs at home reported by LV participants (2.2) was slightly but not significantly (p=0.09) higher than that of NS participants (2.0). LV participants were equally likely to have a computer but were significantly (p=0.004) less likely to access the internet (73/104) compared to NS participants (82/94). Most LV participants expressed an interest in image-enhancing technology for TV viewing (67/104) and for computer use (50/74), if they used a computer. Conclusion: In this study, both NS and LV participants

  10. The use of in vivo, ex vivo, in vitro, computational models and volunteer studies in vision research and therapy, and their contribution to the Three Rs.

    Science.gov (United States)

    Combes, Robert D; Shah, Atul B

    2016-07-01

    Much is known about mammalian vision, and considerable progress has been achieved in treating many vision disorders, especially those due to changes in the eye, by using various therapeutic methods, including stem cell and gene therapy. While cells and tissues from the main parts of the eye and the visual cortex (VC) can be maintained in culture, and many computer models exist, the current non-animal approaches are severely limiting in the study of visual perception and retinotopic imaging. Some of the early studies with cats and non-human primates (NHPs) are controversial for animal welfare reasons and are of questionable clinical relevance, particularly with respect to the treatment of amblyopia. More recently, the UK Home Office records have shown that attention is now more focused on rodents, especially the mouse. This is likely to be due to the perceived need for genetically-altered animals, rather than to knowledge of the similarities and differences of vision in cats, NHPs and rodents, and the fact that the same techniques can be used for all of the species. We discuss the advantages and limitations of animal and non-animal methods for vision research, and assess their relative contributions to basic knowledge and clinical practice, as well as outlining the opportunities they offer for implementing the principles of the Three Rs (Replacement, Reduction and Refinement). 2016 FRAME.

  11. A Collaborative Approach for Surface Inspection Using Aerial Robots and Computer Vision

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2018-03-01

Full Text Available Aerial robots with cameras on board can be used in surface inspection to observe areas that are difficult to reach by other means. In this type of problem, it is desirable for aerial robots to have a high degree of autonomy. A way to provide more autonomy would be to use computer vision techniques to automatically detect anomalies on the surface. However, the performance of automated visual recognition methods is limited in uncontrolled environments, so that in practice it is not possible to perform a fully automatic inspection. This paper presents a solution for visual inspection that increases the degree of autonomy of aerial robots following a semi-automatic approach. The solution is based on human-robot collaboration, in which the operator delegates tasks to the drone for exploration and visual recognition, and the drone requests assistance in the presence of uncertainty. We validate this proposal with the development of an experimental robotic system using the software framework Aerostack. The paper describes the technical challenges that we had to solve to develop such a system and the impact of this solution on the degree of autonomy for detecting anomalies on the surface.

  12. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrumental color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.
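Texture features of the kind counted among the 88 above are classically derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch, with contrast and homogeneity as two representative statistics; the offset, quantisation and feature choice are illustrative, not the paper's exact feature set.

```python
import numpy as np

# Sketch of grey-level co-occurrence (GLCM) texture features, the classic
# family behind image texture descriptors. Offset (0, 1) counts horizontal
# neighbour pairs; 'levels' sets the grey-level quantisation.

def glcm(img, levels=8, offset=(0, 1)):
    q = (img.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    dy, dx = offset
    M = np.zeros((levels, levels))
    H, W = q.shape
    for y in range(H - dy):
        for x in range(W - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1
    return M / M.sum()            # normalise to joint probabilities

def contrast(P):
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

def homogeneity(P):
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())
```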

  13. A HOLISTIC APPROACH FOR INSPECTION OF CIVIL INFRASTRUCTURES BASED ON COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    C. Stentoumis

    2016-06-01

Full Text Available This work examines the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimal human intervention for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach to CNN detector initialization and in the use of the modified census transformation for stereo matching, along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. Promisingly, the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure, such as pipelines, bridges and large industrial facilities, that is in need of continuous state assessment during its operational life cycle.
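The modified census transform compares each pixel in a window against the window mean (rather than the centre pixel), producing a bit string that is robust to the radiometric differences mentioned above; disparities are then found by minimising the Hamming distance between bit strings. The sketch below shows only this matching core, without the optimization schemes the paper fuses, and window/search sizes are illustrative.

```python
import numpy as np

# Sketch of the modified census transform (bits set against the window
# mean) with Hamming-distance matching, the stereo cost used above. A
# real pipeline would add the optimization schemes the paper describes.

def modified_census(img, r=1):
    """Per-pixel bit string: window pixels compared against the window mean."""
    H, W = img.shape
    bits = np.zeros((H, W, (2 * r + 1) ** 2), dtype=bool)
    for y in range(r, H - r):
        for x in range(r, W - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            bits[y, x] = (win > win.mean()).ravel()
    return bits

def disparity_map(left, right, max_d=4, r=1):
    """Winner-takes-all disparity by minimum Hamming cost along scanlines."""
    bl, br = modified_census(left, r), modified_census(right, r)
    H, W, _ = bl.shape
    disp = np.zeros((H, W), dtype=int)
    for y in range(r, H - r):
        for x in range(r + max_d, W - r):
            costs = [np.count_nonzero(bl[y, x] ^ br[y, x - d])
                     for d in range(max_d + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Because the bit string encodes only relative ordering within the window, a brightness offset between the left and right cameras leaves the cost unchanged, which is the property that makes census-style costs attractive for harsh tunnel imagery.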

  14. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Science.gov (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

This work examines the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimal human intervention for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach to CNN detector initialization and in the use of the modified census transformation for stereo matching, along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. Promisingly, the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure, such as pipelines, bridges and large industrial facilities, that is in need of continuous state assessment during its operational life cycle.

  15. m-BIRCH: an online clustering approach for computer vision applications

    Science.gov (United States)

    Madan, Siddharth K.; Dana, Kristin J.

    2015-03-01

We adapt a classic online clustering algorithm called Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to incrementally cluster large datasets of features commonly used in multimedia and computer vision. We call the adapted version modified-BIRCH (m-BIRCH). The algorithm uses only a fraction of the dataset memory to perform clustering, and updates the clustering decisions when new data come in. The modifications made in m-BIRCH enable data-driven parameter selection and effectively handle varying-density regions in the feature space. Data-driven parameter selection automatically controls the level of coarseness of the data summarization. Effective handling of varying-density regions is necessary to represent the different density regions well in the data summarization. We use m-BIRCH to cluster 840K color SIFT descriptors and 60K outlier-corrupted grayscale patches, and we use the algorithm to cluster datasets consisting of challenging non-convex clustering patterns. Our implementation of the algorithm provides a useful clustering tool and is made publicly available.
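The memory economy of BIRCH comes from its clustering-feature (CF) summary: each cluster keeps only a count, a linear sum and a sum of squares, so points can be absorbed one at a time without storing the dataset. The sketch below shows that mechanism with a fixed radius threshold; m-BIRCH's data-driven parameter selection and CF-tree structure are not reproduced.

```python
import numpy as np

# Minimal sketch of the clustering-feature (CF) summarisation at the heart
# of BIRCH: each cluster is summarised by (N, linear sum, sum of squares).
# The radius threshold is fixed and illustrative; m-BIRCH selects its
# parameters from the data and maintains a CF tree, neither shown here.

class CFCluster:
    def __init__(self, x):
        self.n, self.ls, self.ss = 1, x.copy(), float(x @ x)

    def radius_if_added(self, x):
        """Cluster radius if x were absorbed, from the CF statistics alone."""
        n, ls, ss = self.n + 1, self.ls + x, self.ss + float(x @ x)
        c = ls / n
        return float(np.sqrt(max(ss / n - c @ c, 0.0)))

    def add(self, x):
        self.n += 1
        self.ls += x
        self.ss += float(x @ x)

def online_cluster(points, threshold):
    clusters = []
    for x in points:
        x = np.asarray(x, dtype=float)
        best = min(clusters, key=lambda c: np.linalg.norm(c.ls / c.n - x),
                   default=None)
        if best is not None and best.radius_if_added(x) <= threshold:
            best.add(x)            # absorb into the nearest cluster
        else:
            clusters.append(CFCluster(x))   # start a new cluster
    return clusters
```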

  16. Information theory analysis of sensor-array imaging systems for computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than the levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband, to minimize aliasing at the cost of blurring, and the SNR is very high, to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30 percent at high SNRs.

  17. A computer vision-based automated Figure-8 maze for working memory test in rodents.

    Science.gov (United States)

    Pedigo, Samuel F; Song, Eun Young; Jung, Min Whan; Kim, Jeansok J

    2006-09-30

    The benchmark test for prefrontal cortex (PFC)-mediated working memory in rodents is a delayed alternation task utilizing variations of T-maze or Figure-8 maze, which requires the animals to make specific arm entry responses for reward. In this task, however, manual procedures involved in shaping target behavior, imposing delays between trials and delivering rewards can potentially influence the animal's performance on the maze. Here, we report an automated Figure-8 maze which does not necessitate experimenter-subject interaction during shaping, training or testing. This system incorporates a computer vision system for tracking, motorized gates to impose delays, and automated reward delivery. The maze is controlled by custom software that records the animal's location and activates the gates according to the animal's behavior and a control algorithm. The program performs calculations of task accuracy, tracks movement sequence through the maze, and provides other dependent variables (such as running speed, time spent in different maze locations, activity level during delay). Testing in rats indicates that the performance accuracy is inversely proportional to the delay interval, decreases with PFC lesions, and that animals anticipate timing during long delays. Thus, our automated Figure-8 maze is effective at assessing working memory and provides novel behavioral measures in rodents.

  18. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
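The failure mode described, a detector flipping its output under imperceptibly small perturbations, can be probed with a simple randomized search. The sketch below uses a trivial brightness-threshold stub in place of a real detector (such as OpenCV's HOG people detector) and random rather than symbolic search, so it illustrates the robustness question, not the paper's symbolic-plus-statistical method.

```python
import numpy as np

# Sketch of a perturbation robustness probe: search for a small random
# perturbation (bounded by eps) that flips a detector's output on a frame.
# 'detect' is any boolean-valued detector; here a stub stands in for a
# real detector such as OpenCV's HOG people detector.

def find_flipping_perturbation(detect, frame, eps=2.0, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    baseline = detect(frame)
    for _ in range(trials):
        noise = rng.uniform(-eps, eps, size=frame.shape)
        if detect(frame + noise) != baseline:
            return noise      # perturbation too small to see, yet flips output
    return None               # no flip found within the trial budget
```

When the frame sits near the detector's decision boundary, a flipping perturbation is found almost immediately; frames far from the boundary survive the whole budget, which is exactly the distinction a robustness analysis wants to expose.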

  19. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    Science.gov (United States)

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on opto-electronic measuring techniques is presented in this paper. A charge-coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey-level values. The 'mapping function method' is used to transform image pixel coordinates to space coordinates. The images of wheel sets were captured as the train passed through the measuring system, and the rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
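A mapping function of this kind can be fitted by least squares from calibration targets with known space coordinates. The sketch below uses a quadratic polynomial basis as an illustrative assumption; the paper does not specify the functional form of its mapping.

```python
import numpy as np

# Sketch of a 'mapping function method': fit a polynomial mapping from
# image pixel coordinates (u, v) to space coordinates (x, y) using
# calibration targets at known positions. The quadratic basis is an
# illustrative assumption, not the paper's stated form.

def design_matrix(uv):
    u, v = uv[:, 0], uv[:, 1]
    # Quadratic terms absorb mild lens distortion
    return np.stack([np.ones_like(u), u, v, u * v, u**2, v**2], axis=1)

def fit_mapping(uv_pix, xy_mm):
    A = design_matrix(uv_pix)
    coef, *_ = np.linalg.lstsq(A, xy_mm, rcond=None)
    return coef                      # shape (6, 2): one column per output axis

def pixel_to_space(coef, uv_pix):
    return design_matrix(np.atleast_2d(np.asarray(uv_pix, float))) @ coef
```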

  20. Classification of Vehicle Types in Car Parks using Computer Vision Techniques

    Directory of Open Access Journals (Sweden)

    Chadly Marouane

    2015-08-01

    Full Text Available The growing population of big cities has led to certain issues, such as overloaded car parks. Ubiquitous systems can help to increase capacity through efficient usage of existing parking slots. In this case, cars are recognized during the entrance phase in order to guide them automatically to a suitable slot for space-saving reasons. Prior to this step, it is necessary to determine the size of the vehicles. In this work, we analyze different methods for vehicle classification and size measurement using the existing hardware of car parks. Computer vision techniques are applied to extract information from the video streams of existing security cameras. For streams with lower resolution, a method is introduced that determines the width and height of a car with the help of reference objects. For streams with higher resolution, a second approach is applied using face recognition algorithms and a training database in order to classify car types. Our evaluation in a real-life scenario at a major German airport showed a small error deviation of just a few centimeters for the first method. For the type classification approach, an applicable accuracy of over 80 percent, with up to 100 percent in certain cases, has been achieved. Given these results, the presented methods show high potential for suitable determination of vehicles based on installed security cameras.
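The low-resolution approach rests on a simple proportionality: a reference object of known size fixes the scale of the image plane, and the car's real width follows from its pixel width. A sketch with invented numbers:

```python
# Hypothetical reference object: a parking-slot marking of known real size
# that appears in the same image plane as the car.
ref_real_width_cm = 200.0
ref_pixel_width = 80.0
cm_per_pixel = ref_real_width_cm / ref_pixel_width   # scale factor

car_pixel_width = 72.0
car_width_cm = car_pixel_width * cm_per_pixel        # 180.0 cm
```

This only holds when car and reference lie at comparable depth; otherwise perspective correction is needed first.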

  1. Differentiation of Ecuadorian National and CCN-51 cocoa beans and their mixtures by computer vision.

    Science.gov (United States)

    Jimenez, Juan C; Amores, Freddy M; Solórzano, Eddyn G; Rodríguez, Gladys A; La Mantia, Alessandro; Blasi, Paolo; Loor, Rey G

    2018-05-01

    Ecuador exports two major types of cocoa beans, the highly regarded and lucrative National, known for its fine aroma, and the CCN-51 clone type, used in bulk for mass chocolate products. In order to discourage exportation of National cocoa adulterated with CCN-51, a fast and objective methodology for distinguishing between the two types of cocoa beans is needed. This study reports a methodology based on computer vision, which makes it possible to recognize these beans and determine the percentage of their mixture. The methodology was challenged with 336 samples of National cocoa and 127 of CCN-51. By excluding the samples with a low fermentation level and white beans, the model discriminated with a precision higher than 98%. The model was also able to identify and quantify adulterations in 75 export batches of National cocoa and separate out poorly fermented beans. A scientifically reliable methodology able to discriminate between Ecuadorian National and CCN-51 cocoa beans and their mixtures was successfully developed. © 2017 Society of Chemical Industry.

  2. UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

    Science.gov (United States)

    Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

    2017-08-01

    On the Waterfront Italo Falcomatà of Reggio Calabria one can admire the most extensive tract of the walls of the Hellenistic period of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the past linked to the culture of Ancient Greece in the territory of Reggio Calabria. Over the years, up to the reconstruction of Reggio after the earthquake of 1783, this stretch of wall was always part of the outer fortifications; it was restored countless times to cope with degradation over time and to adapt to increasingly innovative and sophisticated siege techniques. The walls have been the subject of several historical studies, examining their construction techniques as well as their maintenance and restoration. This note describes the methodology for the implementation of a three-dimensional model of the Greek Walls conducted by the Geomatics Laboratory of the DICEAM Department of the University “Mediterranea” of Reggio Calabria. The 3D modeling is based on imaging techniques, such as Digital Photogrammetry and Computer Vision, using a drone. The acquired digital images were then processed using the commercial software Agisoft PhotoScan. The results demonstrate the effectiveness of the technique in the field of cultural heritage, as an attractive alternative to more expensive and demanding techniques such as laser scanning.

  3. When Ultrasonic Sensors and Computer Vision Join Forces for Efficient Obstacle Detection and Recognition.

    Science.gov (United States)

    Mocanu, Bogdan; Tapu, Ruxandra; Zaharia, Titus

    2016-10-28

    In the most recent report published by the World Health Organization concerning people with visual disabilities, it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to accurately identify both static and highly dynamic objects present in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology we performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing mobility, while being friendly and easy to learn.

  4. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure

    Directory of Open Access Journals (Sweden)

    Jonathan P. Dandois

    2015-10-01

    Full Text Available Ecological remote sensing is being transformed by three-dimensional (3D), multispectral measurements of forest canopies by unmanned aerial vehicles (UAV) and computer vision structure from motion (SFM) algorithms. Yet applications of this technology have out-paced understanding of the relationship between collection method and data quality. Here, UAV-SFM remote sensing was used to produce 3D multispectral point clouds of temperate deciduous forests at different levels of UAV altitude, image overlap, weather, and image processing. Error in canopy height estimates was explained by the alignment of the canopy height model to the digital terrain model (R2 = 0.81) due to differences in lighting and image overlap. Accounting for this, no significant differences were observed in height error at different levels of lighting, altitude, and side overlap. Overall, accurate estimates of canopy height compared to field measurements (R2 = 0.86, RMSE = 3.6 m) and LIDAR (R2 = 0.99, RMSE = 3.0 m) were obtained under optimal conditions of clear lighting and high image overlap (>80%). Variation in point cloud quality appeared related to the behavior of SFM ‘image features’. Future research should consider the role of image features as the fundamental unit of SFM remote sensing, akin to the pixel of optical imaging and the laser pulse of LIDAR.
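The canopy-height evaluation above rests on a simple relation: the canopy height model (CHM) is the digital surface model minus the digital terrain model, and accuracy is summarised by RMSE against field measurements. A sketch with invented numbers:

```python
import numpy as np

# Toy values standing in for SFM products and field data (metres).
dsm = np.array([25.1, 30.4, 18.9, 22.7])    # point-cloud surface elevations
dtm = np.array([2.1, 3.0, 1.5, 2.2])        # ground elevations
chm = dsm - dtm                             # canopy height estimates

field = np.array([23.5, 27.0, 17.0, 20.0])  # field-measured tree heights
rmse = np.sqrt(np.mean((chm - field) ** 2)) # accuracy summary, as in the study
```

Misalignment between the CHM and DTM shifts `chm` systematically, which is why the study found alignment to be the dominant error source.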

  5. When Ultrasonic Sensors and Computer Vision Join Forces for Efficient Obstacle Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Bogdan Mocanu

    2016-10-01

    Full Text Available In the most recent report published by the World Health Organization concerning people with visual disabilities, it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to accurately identify both static and highly dynamic objects present in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology we performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing mobility, while being friendly and easy to learn.

  6. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    Science.gov (United States)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed remains a challenging problem. In this study, a new computer-vision-based order tracking method is proposed to address it. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the time-domain signal is transformed into an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
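The equi-angle resampling step can be sketched with synthetic data: integrate the IRS to get cumulative shaft angle, then interpolate the signal at uniform angle increments, after which a component locked to a shaft order appears as a single spectral peak despite the speed ramp. The IRS profile and the fault order below are invented; in the paper the IRS comes from video, not a formula.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 2.0, 1 / fs)
irs_hz = 10 + 5 * t                            # speed ramps 10 -> 20 rev/s
angle = 2 * np.pi * np.cumsum(irs_hz) / fs     # cumulative shaft angle (rad)
signal = np.sin(3 * angle)                     # component locked to order 3

# Resample at equal angle steps: in the angle domain the order-3 component
# becomes a pure tone, even though its time-domain frequency is sweeping.
n_per_rev = 64
uniform_angle = np.arange(angle[0], angle[-1], 2 * np.pi / n_per_rev)
resampled = np.interp(uniform_angle, angle, signal)

spectrum = np.abs(np.fft.rfft(resampled * np.hanning(len(resampled))))
orders = np.fft.rfftfreq(len(resampled), d=1 / n_per_rev)  # axis in orders
peak_order = orders[np.argmax(spectrum)]       # close to 3 despite the ramp
```

A time-domain FFT of `signal` would instead smear the component across the 30 to 60 Hz band swept during the ramp.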

  7. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    Science.gov (United States)

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period in which 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
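As a small illustration of the colour-space background such reviews cover: hue often tracks an indicator's colour change more robustly than raw RGB, and Python's standard library can already do the conversion. The reagent colours below are hypothetical examples, not data from the review.

```python
import colorsys

# colorsys expects channels in [0, 1]; return hue in degrees (0-360).
def rgb_to_hue_degrees(r, g, b):
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360

# A hypothetical indicator shifting from yellow toward red with
# increasing analyte concentration sweeps hue from 60 toward 0 degrees.
hue_start = rgb_to_hue_degrees(255, 255, 0)   # pure yellow -> 60
hue_end = rgb_to_hue_degrees(255, 0, 0)       # pure red    -> 0
```

A calibration curve is then a fit of hue (or another channel) against known concentrations.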

  8. Real-Time Evaluation of Breast Self-Examination Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Eman Mohammadi

    2014-01-01

    Full Text Available Breast cancer is the most common cancer among women worldwide, and breast self-examination (BSE) is considered the most cost-effective approach for early breast cancer detection. The general objective of this paper is to design and develop a computer vision algorithm to evaluate BSE performance in real time. The first stage of the algorithm presents a method for detecting and tracking the nipples in frames while a woman performs BSE; the second stage presents a method for localizing the breast region and the blocks of pixels related to palpation of the breast; and the third stage focuses on detecting the palpated blocks in the breast region. The palpated blocks are highlighted during BSE performance. In a correct BSE performance, all blocks must be palpated, checked, and highlighted. If any abnormality, such as a mass, is detected, this must be reported to a doctor to confirm its presence and to proceed with other confirmatory tests. The experimental results have shown that the BSE evaluation algorithm presented in this paper provides robust performance.
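The completeness criterion in the third stage reduces to bookkeeping over a grid of breast-region blocks: mark each block when palpation is detected there, and declare the exam correct only when every block has been covered. The grid size and helper names below are illustrative, not the paper's.

```python
# Hypothetical 4x4 grid of breast-region blocks; a real system would size
# the grid from the localized breast region in the frame.
ROWS, COLS = 4, 4
palpated = [[False] * COLS for _ in range(ROWS)]

def mark_palpated(r, c):
    """Record that palpation was detected in block (r, c)."""
    palpated[r][c] = True

def exam_complete():
    """A correct BSE requires every block to have been palpated."""
    return all(all(row) for row in palpated)

# Simulate a pass that covers every block.
for r in range(ROWS):
    for c in range(COLS):
        mark_palpated(r, c)

complete = exam_complete()   # True only with full coverage
```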

  9. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters for assessing welfare. However, behavioural recording (usually from video) can be very time consuming, and the accuracy and reliability of the output rely on the experience and background of the observers. The advent of new video technology and computer image processing provides the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can later be labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour over time or between individuals can be assessed. The software's accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent of human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as save time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  10. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters for assessing welfare. However, behavioural recording (usually from video) can be very time consuming, and the accuracy and reliability of the output rely on the experience and background of the observers. The advent of new video technology and computer image processing provides the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can later be labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour over time or between individuals can be assessed. The software's accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent of human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as save time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  11. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  12. Pediatric Low Vision

    Science.gov (United States)

    What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  13. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is an important parameter in the production and processing of food products. Traditionally, volume measurement is performed using the water displacement method based on Archimedes' principle. The water displacement method is inaccurate and is considered destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision in volume measurement.
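The core idea can be sketched directly: sample random points in the object's bounding box and keep those whose projections lie inside every silhouette; the hit fraction times the box volume approximates the object's (visual-hull) volume. Below, the three orthogonal silhouettes are unit circles, so the recovered hull is the classic tricylinder intersection with known volume 8(2 − √2) ≈ 4.686, which makes the estimate checkable; the paper uses five real silhouettes instead.

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.uniform(-1, 1, size=(200_000, 3))      # bounding box [-1, 1]^3

def in_silhouette(u, v):
    """Point projects inside a unit-circle silhouette."""
    return u**2 + v**2 <= 1.0

inside = (in_silhouette(pts[:, 0], pts[:, 1]) &  # top view
          in_silhouette(pts[:, 0], pts[:, 2]) &  # front view
          in_silhouette(pts[:, 1], pts[:, 2]))   # side view

volume = 8.0 * inside.mean()    # box volume * fraction of accepted samples
```

Accuracy improves as 1/sqrt(N); with 200 000 samples the standard error here is below 0.01.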

  14. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks, focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real time and leads to a substantial increase in search efficiency. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. A clinical study on “Computer vision syndrome” and its management with Triphala eye drops and Saptamrita Lauha

    Science.gov (United States)

    Gangamma, M. P.; Poonam; Rajagopala, Manjusha

    2010-01-01

    The American Optometric Association (AOA) defines computer vision syndrome (CVS) as a “complex of eye and vision problems related to near work, which are experienced during or related to computer use”. Most studies indicate that Video Display Terminal (VDT) operators report more eye-related problems than non-VDT office workers. The causes of the inefficiencies and the visual symptoms are a combination of individual visual problems and poor office ergonomics. In this clinical study on CVS, 151 patients were registered, of whom 141 completed the treatment. In Group A, 45 patients were prescribed Triphala eye drops; in Group B, 53 patients were prescribed Triphala eye drops and Saptamrita Lauha tablets internally; and in Group C, 43 patients were prescribed placebo eye drops and placebo tablets. In total, marked improvement was observed in 48.89, 54.71 and 6.98% of patients in groups A, B and C, respectively. PMID:22131717

  16. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030.

    Science.gov (United States)

    Slotnick, Jeffrey P; Khodadoust, Abdollah; Alonso, Juan J; Darmofal, David L; Gropp, William D; Lurie, Elizabeth A; Mavriplis, Dimitri J; Venkatakrishnan, Venkat

    2014-08-13

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be 'cleaner' and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  17. Evaluation of body weight of sea cucumber Apostichopus japonicus by computer vision

    Science.gov (United States)

    Liu, Hui; Xu, Qiang; Liu, Shilin; Zhang, Libin; Yang, Hongsheng

    2015-01-01

    Apostichopus japonicus (Holothuroidea, Echinodermata) is an ecologically and economically important species in East Asia. The conventional biometric monitoring method involves diving for samples and weighing them above water, with high variability in weight measurement due to variation in the quantity of water in the respiratory tree and the intestinal content of this species. Recently, video survey methods have been applied widely in biometric detection of underwater benthos. However, because of the high flexibility of the A. japonicus body, video survey monitoring has seldom been used for sea cucumbers. In this study, we designed a model to evaluate the wet weight of A. japonicus, using machine vision technology combined with a support vector machine (SVM), that can be used in field surveys of the A. japonicus population. Continuous dorsal images of free-moving A. japonicus individuals in seawater were captured, allowing extraction of the core body edge as well as thorn segmentation. Parameters including body length, body breadth, perimeter and area were extracted from the core body edge images and used in SVM regression to predict the weight of A. japonicus, for comparison with a power model. Results indicate that the use of SVM for predicting the weight of 33 A. japonicus individuals is accurate (R2 = 0.99) and compatible with the power model (R2 = 0.96). The image-based analysis and size-weight regression models in this study may be useful in body weight evaluation of A. japonicus in lab and field studies.
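The power-model baseline the study compares against, W = a·L^b, becomes a linear fit after taking logarithms. A minimal sketch on synthetic length/weight pairs (the data and coefficients below are invented, not the study's):

```python
import numpy as np

# Toy size-weight data generated from a known power law W = 0.05 * L**2.7,
# so the fit below should recover a = 0.05 and b = 2.7.
length = np.array([6.0, 8.0, 10.0, 12.0, 14.0])   # body length (cm)
weight = 0.05 * length ** 2.7                     # wet weight (g)

# log W = log a + b * log L: ordinary least squares in log space.
X = np.vstack([np.ones_like(length), np.log(length)]).T
coef, *_ = np.linalg.lstsq(X, np.log(weight), rcond=None)
a, b = np.exp(coef[0]), coef[1]
```

The study's SVM regressor plays the same role but takes several shape features (length, breadth, perimeter, area) instead of length alone.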

  18. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    Science.gov (United States)

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

    With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments of food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, has the capability to estimate the characteristics of food products with the advantages of high speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review presents the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013, and also discusses future trends in combination with spectroscopy.

  19. Designing of Computer Vision Algorithm to Detect Sweet Pepper for Robotic Harvesting Under Natural Light

    Directory of Open Access Journals (Sweden)

    A Moghimi

    2015-03-01

    Full Text Available In recent years, automation in the agricultural field has attracted increasing attention from researchers and greenhouse producers. The main reasons are to reduce costs, including labor costs, and to reduce the hard working conditions in greenhouses. In the present research, a vision system for a harvesting robot was developed for the recognition of green sweet pepper on the plant under natural light. The major challenge of this study was the noticeable color similarity between sweet peppers and plant leaves. To overcome this challenge, a new texture index based on edge density approximation (EDA) was defined and utilized in combination with color indices such as hue, saturation and the excess green index (EGI). Fifty images were captured from fifty sweet pepper plants to evaluate the algorithm. The algorithm could recognize 92 out of 107 sweet peppers (a detection accuracy of 86%) located within the workspace of the robot. The error of the system in recognizing background, mostly leaves, as green sweet pepper decreased by 92.98% when the newly defined texture index was used, in comparison with color analysis alone. This shows the importance of integrating texture with color features when recognizing sweet peppers. The main causes of error, besides color similarity, were the waxy and rough surfaces of sweet peppers, which cause higher reflectance and non-uniform lighting on the surface, respectively.
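The two cues combined above can be sketched concretely: the excess green index (EGI = 2G − R − B) for colour, and a local edge-density measure for texture, which helps separate smooth, waxy pepper surfaces from veined leaves. The gradient rule, threshold, and the tiny patches below are illustrative stand-ins for the paper's EDA index.

```python
import numpy as np

def excess_green(rgb):
    """EGI = 2G - R - B for an H x W x 3 float image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2 * g - r - b

def edge_density(gray, thresh=0.2):
    """Fraction of pixels whose horizontal gradient exceeds `thresh`."""
    gx = np.abs(np.diff(gray, axis=1))
    return (gx > thresh).mean()

# Both patches are green (similar EGI), but texture tells them apart:
smooth_patch = np.full((8, 8), 0.5)          # pepper-like: uniform, no edges
veined_patch = np.tile([0.2, 0.8], (8, 4))   # leaf-like: alternating stripes
pepper_like = edge_density(smooth_patch) < edge_density(veined_patch)
```

This is exactly the failure mode of colour-only segmentation that the texture index addresses: two green regions with the same EGI but very different edge statistics.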

  20. CERN’s Computing rules updated to include policy for control systems

    CERN Document Server

    IT Department

    2008-01-01

    The use of CERN’s computing facilities is governed by rules defined in Operational Circular No. 5 and its subsidiary rules of use. These rules are available from the web site http://cern.ch/ComputingRules. Please note that the subsidiary rules for Internet/Network use have been updated to include a requirement that control systems comply with the CNIC (Computing and Network Infrastructure for Control) Security Policy. The security policy for control systems, which was approved earlier this year, can be accessed at https://edms.cern.ch/document/584092 IT Department

  1. Computational approach to radiogenomics of breast cancer: Luminal A and luminal B molecular subtypes are associated with imaging features on routine breast MRI extracted using computer vision algorithms.

    Science.gov (United States)

    Grimm, Lars J; Zhang, Jing; Mazurowski, Maciej A

    2015-10-01

    To identify associations between semiautomatically extracted MRI features and breast cancer molecular subtypes. We analyzed routine clinical pre-operative breast MRIs from 275 breast cancer patients at a single institution in this retrospective, Institutional Review Board-approved study. Six fellowship-trained breast imagers reviewed the MRIs and annotated the cancers. Computer vision algorithms were then used to extract 56 imaging features from the cancers including morphologic, texture, and dynamic features. Surrogate markers (estrogen receptor [ER], progesterone receptor [PR], human epidermal growth factor receptor-2 [HER2]) were used to categorize tumors by molecular subtype: ER/PR+, HER2- (luminal A); ER/PR+, HER2+ (luminal B); ER/PR-, HER2+ (HER2); ER/PR/HER2- (basal). A multivariate analysis was used to determine associations between the imaging features and molecular subtype. The imaging features were associated with both luminal A (P = 0.0007) and luminal B (P = 0.0063) molecular subtypes. No association was found for either HER2 (P = 0.2465) or basal (P = 0.1014) molecular subtype and the imaging features. A P-value of 0.0125 (0.05/4) was considered significant. Luminal A and luminal B molecular subtype breast cancer are associated with semiautomatically extracted features from routine contrast enhanced breast MRI. © 2015 Wiley Periodicals, Inc.
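The significance threshold reported above is a Bonferroni correction: with four molecular subtypes tested, the familywise alpha of 0.05 is divided by the number of comparisons. The sketch below applies that rule to the P-values quoted in the abstract:

```python
# Bonferroni correction as used in the study: four subtype tests share a
# familywise alpha of 0.05.
alpha, n_tests = 0.05, 4
threshold = alpha / n_tests          # 0.0125, as stated in the abstract

p_values = {
    "luminal_A": 0.0007,
    "luminal_B": 0.0063,
    "HER2": 0.2465,
    "basal": 0.1014,
}
significant = [name for name, p in p_values.items() if p < threshold]
# Only the luminal A and luminal B associations survive the correction.
```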

  2. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions

  3. A computer vision system for the recognition of trees in aerial photographs

    Science.gov (United States)

    Pinz, Axel J.

    1991-01-01

    Increasing forest damage in Central Europe creates the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to multiple interpretation results for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  4. Development of a Computer Vision Technology for the Forest Products Manufacturing Industry

    Science.gov (United States)

    D. Earl Kline; Richard Conners; Philip A. Araman

    1992-01-01

    The goal of this research is to create an automated processing/grading system for hardwood lumber that will be of use to the forest products industry. The objective of creating a full scale machine vision prototype for inspecting hardwood lumber will become a reality in calendar year 1992. Space for the full scale prototype has been created at the Brooks Forest...

  5. 78 FR 63492 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2013-10-24

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-847] Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof; Notice of Request for Statements on the Public Interest AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is...

  6. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    Science.gov (United States)

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored by current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, in terms of both accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
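    The reported nine-electrode result can be pictured with a simple mental model: the simulated implant view is, to first approximation, an average-pooled version of the scene. The sketch below is illustrative only, not the authors' simulator, and the `electrode_activation` helper is a hypothetical name:

```python
import numpy as np

def electrode_activation(image, grid=3):
    """Reduce a grayscale image to a grid x grid array of 'electrode'
    levels by average pooling - a crude stand-in for the low-resolution
    view a small implant provides."""
    h, w = image.shape
    hs, ws = h // grid, w // grid
    img = image[:hs * grid, :ws * grid].astype(np.float64)
    return img.reshape(grid, hs, grid, ws).mean(axis=(1, 3))
```

    For an object-localization-based device of the kind the study simulates, the pooling would be applied to a region cropped around the detected target rather than to the whole frame.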

  7. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning and what has made them work or not work is analysed. The empirical part is based on analyses of the roles of visions...... important aspect of visioning processes include the types of actors participating in the processes and the types of expertise included in the processes (scientific, lay, business etc.). The empirical part of the paper analyses eight national foresight activities from Denmark, Germany, Hungary, Malta...

  8. From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning

    Science.gov (United States)

    Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle

    2016-04-01

    The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost - a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case in part due to the larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size, which is clearly visible in radar and infrared satellite imagery and makes it easier to detect using Computer Vision (CV). The power of CV techniques makes the basic analysis developed here scalable to other smaller and less known, but still influential, currents, which are not just curves on a map, but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as a feature space representation in a machine learning context, in this case within the EC's H2020-sponsored 'See.4C' project, in the context of which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of Gulf Stream position exist, they differ in methodology

  9. The vision of David Marr.

    Science.gov (United States)

    Stevens, Kent A

    2012-01-01

    Marr proposed a computational paradigm for studying the visual system, wherein aspects of vision would be amenable to study with what might be regarded as a computational-reductionist approach. First, vision would be cleaved into separable 'computational theories', in which the visual system is characterized in terms of its computational goals and the strategies by which they are carried out. Each such computational theory could then be investigated in increasingly concrete terms, from symbols and measurements, to representations and algorithms, to processes and neural implementations. This paradigm rests on some general expectations of a symbolic information processing system, including his stated principles of explicit naming, modular design, least commitment, and graceful degradation. In retrospect, the computational framework also tacitly rests on additional assumptions about the nature of biological information processing: (1) separability of computational strategies, (2) separability of representations, (3) a pipeline nature of information processing, and that (4) the representations employ primitives of low dimensionality. These assumptions are discussed in this retrospective.

  10. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  11. High accuracy position method based on computer vision and error analysis

    Science.gov (United States)

    Chen, Shihao; Shi, Zhongke

    2003-09-01

    The study of high-accuracy positioning systems is becoming a hot topic in the field of automatic control, and positioning is one of the most researched tasks in vision systems, so we address object locating using image-processing methods. This paper describes a new high-accuracy positioning method based on a vision system. In the proposed method, an edge-detection filter is designed for a certain running condition. The filter contains two main parts: one is the image-processing module, which implements edge detection and consists of multi-level threshold self-adapting segmentation, edge detection and edge filtering; the other is the object-locating module, which determines the location of each object with high accuracy and is made up of median filtering and curve fitting. The paper gives an error analysis for the method to prove the feasibility of vision in position detecting. Finally, to verify the availability of the method, an example of a positioning worktable using the proposed method is given at the end of the paper. Results show that the method can accurately detect the position of the measured object and identify its attitude.
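    The combination of edge detection and curve fitting described above is the classic route to sub-pixel positioning. One common realization (a sketch under that assumption, not necessarily this paper's exact procedure) fits a parabola through the gradient peak of an intensity profile and takes the vertex as the refined edge location:

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an intensity step along a 1-D profile with sub-pixel accuracy:
    take the gradient magnitude, then fit a parabola through the peak sample
    and its two neighbours; the parabola vertex is the refined position."""
    g = np.abs(np.gradient(np.asarray(profile, dtype=np.float64)))
    i = int(np.argmax(g[1:-1])) + 1          # skip boundary samples
    y0, y1, y2 = g[i - 1], g[i], g[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return i + offset
```

    For a hard step between samples 9 and 10 the vertex lands at 9.5; for a smooth edge the fit shifts the integer peak by a fraction of a pixel toward the true transition.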

  12. Microvision system (MVS): a 3D computer graphic-based microrobot telemanipulation and position feedback by vision

    Science.gov (United States)

    Sulzmann, Armin; Breguet, Jean-Marc; Jacot, Jacques

    1995-12-01

    The aim of our project is to control the position of a microrobot in 3D space with sub-micron accuracy and to manipulate microsystems with the aid of real-time 3D computer graphics (virtual reality). As microsystems and microstructures become smaller, it is necessary to build a microrobot ((mu)-robot) capable of manipulating these systems and structures with a precision of 1 micrometer or even higher, and these movements have to be controlled and guided. The first part of our project was to develop a real-time 3D computer graphics (virtual reality) man-machine interface to guide the newly developed robot, similar to the environment we built for macroscopic robotics. Secondly, we want to evaluate measurement techniques to verify its position in the region of interest (workspace). A new type of microrobot has been developed for our purpose. Its simple and compact design is believed to be promising in the microrobotics field. Stepping motion allows speeds up to 4 mm/s; resolution smaller than 10 nm is achievable. We also focus on the vision system and on the virtual reality interface of the complex system. Basically, the user interacts with the virtual 3D microscope and sees the (mu)-robot as if looking through a real microscope. He is able to simulate the assembly of the missing parts, e.g. parts of a micromotor, beforehand in order to verify the assembly manipulation steps such as measuring, moving the table to the right position or performing the manipulation. Micromanipulation in the form of teleoperation is then performed by the robot unit, and the position is controlled by vision. First results have shown that guided manipulations with sub-micron absolute accuracy can be achieved. The key idea of this approach is to use the intuitiveness of immersed vision to perform robotics tasks in an environment where the human has only access

  13. 48 CFR 1552.239-103 - Acquisition of Energy Star Compliant Microcomputers, Including Personal Computers, Monitors and...

    Science.gov (United States)

    2010-10-01

    ... Compliant Microcomputers, Including Personal Computers, Monitors and Printers. 1552.239-103 Section 1552.239... Star Compliant Microcomputers, Including Personal Computers, Monitors and Printers. As prescribed in... Personal Computers, Monitors, and Printers (APR 1996) (a) The Contractor shall provide computer products...

  14. Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

    Directory of Open Access Journals (Sweden)

    Junhwa Lee

    2017-10-01

    Full Text Available The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT, is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments.
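    The adaptive region-of-interest idea can be sketched simply: search for the marker template only in a window around its previous location, and recentre the window on each new estimate. The following is a minimal zero-mean normalised cross-correlation version assuming a grayscale frame and a fixed template; it is not the paper's algorithm, which additionally adapts the ROI when sunlight makes the marker indistinct:

```python
import numpy as np

def locate_marker(frame, template, prev_center, roi_half=12):
    """Zero-mean normalised cross-correlation search restricted to a square
    region of interest around the previous marker centre; returns the new
    centre estimate and the best correlation score."""
    th, tw = template.shape
    cy, cx = prev_center
    # search window of template top-left corners, clipped to the frame
    y0 = max(0, cy - th // 2 - roi_half)
    x0 = max(0, cx - tw // 2 - roi_half)
    y1 = min(frame.shape[0] - th, cy - th // 2 + roi_half)
    x1 = min(frame.shape[1] - tw, cx - tw // 2 + roi_half)
    t = template - template.mean()
    best_score, best_pos = -np.inf, (y0, x0)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            win = frame[y:y + th, x:x + tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return (best_pos[0] + th // 2, best_pos[1] + tw // 2), best_score
```

    A natural extension in the spirit of the paper is to grow `roi_half`, or reject the update, whenever `best_score` drops below a threshold, since a weak correlation peak signals an indistinct marker.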

  15. The Effect of the Usage of Computer-Based Assistive Devices on the Functioning and Quality of Life of Individuals Who Are Blind or Have Low Vision

    Science.gov (United States)

    Rosner, Yotam; Perlman, Amotz

    2018-01-01

    Introduction: The Israel Ministry of Social Affairs and Social Services subsidizes computer-based assistive devices for individuals with visual impairments (that is, those who are blind or have low vision) to assist these individuals in their interactions with computers and thus to enhance their independence and quality of life. The aim of this…

  16. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  17. Computer-related vision problems in Osogbo, south-western Nigeria ...

    African Journals Online (AJOL)

    Widespread use of computers for office work and e-learning has resulted in increased visual demands among computer users. The increased visual demands have led to development of ocular complaints and discomfort among users. The objective of this study is to determine the prevalence of computer related eye ...

  18. Research situation and development trend of the binocular stereo vision system

    Science.gov (United States)

    Wang, Tonghao; Liu, Bingqi; Wang, Ying; Chen, Yichao

    2017-05-01

    Since the 21st century began, with the development of computer and signal-processing technology, a new comprehensive subject called computer vision has emerged. Computer vision covers a wide range of knowledge, including physics, mathematics, biology, computer technology and other subjects. It contains much content and is becoming more and more powerful: it can not only realize the "seeing" function of the human eye, but also perform tasks the human eye cannot. In recent years, binocular stereo vision, a main branch of computer vision, has become a focus of research in the field. In this paper, the binocular stereo vision system and the present state of its development and application at home and abroad are summarized. The authors' own opinions on the current problems of binocular stereo vision systems are given, and a prospective view of the future application and development of this technology is presented.
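    The core computation in a binocular stereo system is disparity estimation: a point imaged at column x in the left view appears near column x − d in the right view, and d is inversely proportional to depth. A deliberately naive sum-of-absolute-differences block matcher (illustrative only; real systems add rectification, sub-pixel refinement and left-right consistency checks) looks like this:

```python
import numpy as np

def disparity_block_match(left, right, block=5, max_disp=8):
    """Naive SAD block matching on rectified grayscale images: for each
    left-image block, slide along the same scanline of the right image and
    keep the offset (disparity) with the lowest sum of absolute
    differences."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_sad, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

    Given the camera baseline b and focal length f, depth then follows from the standard relation z = f * b / d.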

  19. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
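    For concreteness, the original Watts-Strogatz construction (ring lattice plus random rewiring) and the clustering coefficient by which it is judged can be sketched in a few lines. This is the classic model, not the degree-distribution extension the paper develops:

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    (k even); every 'forward' lattice edge is rewired to a random
    non-neighbour with probability p. Returns an adjacency dict."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                choices = [v for v in range(n) if v != i and v not in adj[i]]
                if choices and old in adj[i]:
                    new = rng.choice(choices)
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for i, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)
```

    At p = 0 a k = 4 ring lattice has clustering exactly 3(k-2)/(4(k-1)) = 0.5; increasing p shortens characteristic path lengths long before clustering collapses, which is the small-world regime the paper's extended model aims to fit more closely.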

  20. Customized Computer Vision and Sensor System for Colony Recognition and Live Bacteria Counting in Agriculture

    Directory of Open Access Journals (Sweden)

    Gabriel M. ALVES

    2016-06-01

    Full Text Available This paper presents an arrangement based on a dedicated computer and a charge-coupled device (CCD) sensor system to intelligently allow the counting and recognition of colony formation. Microbes in agricultural environments are important catalysts of global carbon and nitrogen cycles, including the production and consumption of greenhouse gases in soil. Some microbes produce greenhouse gases such as carbon dioxide and nitrous oxide while decomposing organic matter in soil. Others consume methane from the atmosphere, helping to mitigate climate change. The magnitude of each of these processes is influenced by human activities and impacts the warming potential of Earth’s atmosphere. In this context, bacterial colony counting is important and requires sophisticated analysis methods. The method implemented in this study uses digital image processing techniques, including the Hough transform for circular objects. The Borland C++ Builder visual environment was used for development, and a model for decision making was incorporated to aggregate intelligence. For calibration of the method, a prepared illuminated chamber was used to enable analyses of the bacteria Escherichia coli and Acidithiobacillus ferrooxidans. For validation, a set of comparisons was established between this smart method and expert analyses. The results show the potential of this method for laboratory applications that involve the quantification and pattern recognition of bacterial colonies in solid culture environments.
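    The Hough transform for circular objects mentioned above works by letting every edge pixel vote for all the centres a circle of a given radius could have; accumulator peaks mark colony centres. A minimal single-radius accumulator (a sketch of the general technique, not the paper's C++ implementation) is:

```python
import numpy as np

def hough_circle_accumulator(edge, radius):
    """Each edge pixel votes for every candidate centre of a circle of the
    given radius passing through it; peaks in the returned accumulator are
    candidate circle (colony) centres."""
    h, w = edge.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    ys, xs = np.nonzero(edge)
    for y, x in zip(ys, xs):
        a = np.round(x - radius * cos_t).astype(int)
        b = np.round(y - radius * sin_t).astype(int)
        ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        np.add.at(acc, (b[ok], a[ok]), 1)   # unbuffered accumulation
    return acc
```

    In practice the radius is swept over the expected colony size range and the accumulator is smoothed before peak picking, so that overlapping colonies produce separable maxima.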

  1. Monitoring and Optimization of the Process of Drying Fruits and Vegetables Using Computer Vision: A Review

    Directory of Open Access Journals (Sweden)

    Flavio Raponi

    2017-11-01

    Full Text Available An overview is given of the most recent uses of non-destructive techniques to monitor quality changes in fruits and vegetables during drying. Quality changes are commonly investigated in order to improve the sensory properties (i.e., appearance, texture, flavor and aroma), nutritive values, chemical constituents and mechanical properties of drying products. The application of single-point spectroscopy coupled with drying is discussed by virtue of its potential to improve the overall efficiency of the process. With a similar purpose, the implementation of a machine vision (MV) system used to inspect foods during drying is investigated; MV, indeed, can easily monitor physical changes (e.g., color, size, texture and shape) in fruits and vegetables during the drying process. Hyperspectral imaging spectroscopy is a sophisticated technology that combines the advantages of spectroscopy and machine vision, and its application to the drying of fruits and vegetables is therefore reviewed. Finally, attention is focused on the implementation of sensors in an on-line process based on the technologies mentioned above. This is a necessary step in order to turn the conventional dryer into a smart dryer, which is a more sustainable way to produce high-quality dried fruits and vegetables.

  2. A malaria diagnostic tool based on computer vision screening and visualization of Plasmodium falciparum candidate areas in digitized blood smears.

    Directory of Open Access Journals (Sweden)

    Nina Linder

    Full Text Available INTRODUCTION: Microscopy is the gold standard for diagnosis of malaria; however, manual evaluation of blood films is highly dependent on skilled personnel in a time-consuming, error-prone and repetitive process. In this study we propose a method using computer vision detection and visualization of only the diagnostically most relevant sample regions in digitized blood smears. METHODS: Giemsa-stained thin blood films with P. falciparum ring-stage trophozoites (n = 27) and uninfected controls (n = 20) were digitally scanned with an oil immersion objective (0.1 µm/pixel) to capture approximately 50,000 erythrocytes per sample. Parasite candidate regions were identified based on color and object size, followed by extraction of image features (local binary patterns, local contrast and scale-invariant feature transform descriptors) used as input to a support vector machine classifier. The classifier was trained on digital slides from ten patients and validated on six samples. RESULTS: The diagnostic accuracy was tested on 31 samples (19 infected and 12 controls). From each digitized area of a blood smear, a panel with the 128 most probable parasite candidate regions was generated. Two expert microscopists were asked to visually inspect the panel on a tablet computer and to judge whether the patient was infected with P. falciparum. The method achieved a diagnostic sensitivity and specificity of 95% and 100% as well as 90% and 100% for the two readers respectively using the diagnostic tool. Parasitemia was separately calculated by the automated system and the correlation coefficient between manual and automated parasitemia counts was 0.97. CONCLUSION: We developed a decision support system for detecting malaria parasites using a computer vision algorithm combined with visualization of sample areas with the highest probability of malaria infection. The system provides a novel method for blood smear screening with a significantly reduced need for
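    Of the features listed, local binary patterns are the easiest to make concrete: each pixel is encoded by thresholding its eight neighbours against it, and the histogram of the resulting codes summarizes local texture. The sketch below covers only this feature-extraction step, not the study's full SVM pipeline:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern codes over the image interior,
    returned as a normalised 256-bin histogram (one common texture feature,
    used here alongside contrast and SIFT-based descriptors)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```

    Per-candidate-region histograms like this one, concatenated with the other descriptors, form the feature vectors that the support vector machine classifies as parasite or artifact.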

  3. Sigma: computer vision in the service of safety and reliability in the inspection services; Sigma: la vision computacional al servicio de la seguridad y fiabilidad en los servicios de inspeccion

    Energy Technology Data Exchange (ETDEWEB)

    Pineiro, P. J.; Mendez, M.; Garcia, A.; Cabrera, E.; Regidor, J. J.

    2012-11-01

    Computer vision has grown very fast in the last decade, with very efficient tools and algorithms. This allows the development of new applications in the nuclear field, providing more efficient equipment and tasks: redundant systems, vision-guided mobile robots, automated visual defect recognition, measurement, etc. In this paper Tecnatom describes a detailed example of a visual computing application developed to provide secure redundant identification of the thousands of tubes in a power plant steam generator. Some other ongoing or planned visual computing projects by Tecnatom are also introduced. New possibilities of application appear in inspection systems for nuclear components, where the main objective is to maximize their reliability. (Author) 6 refs.

  4. Realization for Chinese vehicle license plate recognition based on computer vision and fuzzy neural network

    Science.gov (United States)

    Yang, Yun; Zhang, Weigang; Guo, Pan

    2010-07-01

    The approach proposed in this paper is divided into three steps, namely the location of the plate, the segmentation of the characters and the recognition of the characters. The location algorithm first uses two video captures to obtain high-quality images and estimates the size of the vehicle plate in these images via a parallel binocular stereo vision algorithm. The segmentation method then extracts the edge of the vehicle plate based on a second-generation non-orthogonal Haar wavelet transformation and locates the vehicle plate according to the estimate from the first step. Finally, the recognition algorithm is realized with a radial basis function fuzzy neural network. Experiments have been conducted on real images. The results show this method can decrease the error rate of Chinese license plate recognition.
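    Wavelet-based edge extraction relies on the detail bands of the transform responding strongly to intensity steps. The paper uses a second-generation non-orthogonal Haar transform; as a simpler illustration of the same principle, one level of the ordinary 2-D Haar decomposition can be written as:

```python
import numpy as np

def haar_level1(img):
    """One level of the 2-D Haar decomposition over 2x2 blocks (even image
    dimensions assumed). LL is a half-resolution average; LH and HL respond
    to horizontal and vertical intensity steps, HH to diagonal detail."""
    img = np.asarray(img, dtype=np.float64)
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    ll = (p00 + p01 + p10 + p11) / 4.0
    lh = (p00 + p01 - p10 - p11) / 4.0   # horizontal-edge response
    hl = (p00 - p01 + p10 - p11) / 4.0   # vertical-edge response
    hh = (p00 - p01 - p10 + p11) / 4.0
    return ll, lh, hl, hh
```

    A licence plate region, dense in vertical character strokes, lights up the HL band far more than smooth background, which is what makes the wavelet bands useful for plate localization.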

  5. Vision, reanimated and reimagined.

    Science.gov (United States)

    Edelman, Shimon

    2012-01-01

    The publication in 1982 of David Marr's Vision has delivered a singular boost and a course correction to the science of vision. Thirty years later, cognitive science is being transformed by the new ways of thinking about what it is that the brain computes, how it does that, and, most importantly, why cognition requires these computations and not others. This ongoing process still owes much of its impetus and direction to the sound methodology, engaging style, and unique voice of Marr's Vision.

  6. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    The U.S. Air Force School of Aerospace Medicine Operational Based Vision Assessment (OBVA) Laboratory has developed a set of computer-based, automated vision tests. The user guide notes that calibration settings are stored under the user's "App Data" > "Roaming" > "Automated Vision Test" > "Settings" > "Calibration" folder.

  7. Selected Publications in Image Understanding and Computer Vision from 1974 to 1983

    Science.gov (United States)

    1985-04-18

    Germany, September 26-28, 1978), Plenum, New York, 1979. 9. Reconnaissance des Formes et Intelligence Artificielle (2ème Congrès AFCET-IRIA, Toulouse... the last decade. Abbreviations: AI Artificial Intelligence; BC Biological Cybernetics; CACM Communications of the ACM; CG Computer Graphics... Intelligence; PACM Proceedings of the ACM; P-IEEE Proceedings of the IEEE; P-NCC Proceedings of the National Computer Conference; PR Pattern Recognition; PRL Pattern Recognition Letters

  8. Computer Vision Based Smart Lane Departure Warning System for Vehicle Dynamics Control

    Directory of Open Access Journals (Sweden)

    Ambarish G. Mohapatra

    2011-09-01

    Full Text Available A collision avoidance system solves many problems caused by traffic congestion worldwide through a synergy of new information technologies for simulation, real-time control and communications networks, and is characterized as an intelligent vehicle system. Traffic congestion has been increasing worldwide as a result of increased motorization, urbanization, population growth and changes in population density. Congestion reduces utilization of the transportation infrastructure and increases travel time, air pollution, fuel consumption and, most importantly, traffic accidents. The main objective of this work is to develop a machine vision system for lane departure detection and warning that measures lane-related parameters such as heading angle, lateral deviation, yaw rate and sideslip angle from the road scene image using standard image-processing techniques, so that it can be used to automate the steering of a motor vehicle. The exact position of the steering wheel can be monitored using a steering wheel sensor. The core of this work is a Hough-transform-based edge detection technique for extracting the lane departure parameters. The prototype designed for this work has been tested in a running vehicle for real-time monitoring of lane-related parameters.
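    The Hough transform at the core of such a system maps each edge pixel into votes over (rho, theta) line parameters; the strongest bin gives a lane boundary, and its theta relates directly to the heading angle. A minimal accumulator (illustrative; production systems typically run a probabilistic variant on a cropped road region) is:

```python
import numpy as np

def dominant_line_angle(edge, n_theta=180):
    """Classic (rho, theta) Hough accumulator over edge pixels; returns
    (theta, rho) of the strongest line, with theta in [0, pi) radians and
    rho the signed distance of the line from the image origin."""
    h, w = edge.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    cols = np.arange(n_theta)
    ys, xs = np.nonzero(edge)
    for y, x in zip(ys, xs):
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, cols] += 1
    rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[theta_i], rho_i - diag
```

    Comparing the recovered theta of each lane boundary against the vertical gives the vehicle's heading angle, and the rho of the nearest boundary tracks lateral deviation over successive frames.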

  9. PTAC: a computer program for pressure-transient analysis, including the effects of cavitation. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Kot, C A; Youngdahl, C K

    1978-09-01

    PTAC was developed to predict pressure transients in nuclear-power-plant piping systems in which the possibility of cavitation must be considered. The program performs linear or nonlinear fluid-hammer calculations, using a fixed-grid method-of-characteristics solution procedure. In addition to pipe friction and elasticity, the program can treat a variety of flow components, pipe junctions, and boundary conditions, including arbitrary pressure sources and a sodium/water reaction. Essential features of transient cavitation are modeled by a modified column-separation technique. Comparisons of calculated results with available experimental data, for a simple piping arrangement, show good agreement and provide validation of the computational cavitation model. Calculations for a variety of piping networks, containing either liquid sodium or water, demonstrate the versatility of PTAC and clearly show that neglecting cavitation leads to erroneous predictions of pressure-time histories.
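    The fixed-grid method of characteristics that PTAC employs can be illustrated on the simplest possible case: a frictionless pipe between a constant-pressure reservoir and a valve that closes instantaneously. This linear acoustic sketch (my simplification, not PTAC itself, which adds friction, flow components and cavitation) reproduces the Joukowsky surge rho*a*delta-v at the valve:

```python
import numpy as np

def water_hammer_moc(n=20, steps=40, a=1200.0, rho=1000.0, v0=2.0, p0=5e5):
    """Fixed-grid method of characteristics for a frictionless pipe with a
    constant-pressure reservoir at the inlet and a valve that closes
    instantaneously at the outlet (time step dt = dx / a).
    Returns the peak pressure seen at the valve."""
    B = rho * a                          # characteristic impedance
    p = np.full(n + 1, p0)
    v = np.full(n + 1, v0)
    peak = p0
    for _ in range(steps):
        pn, vn = p.copy(), v.copy()
        for i in range(1, n):
            cp = p[i - 1] + B * v[i - 1]   # C+ characteristic from i-1
            cm = p[i + 1] - B * v[i + 1]   # C- characteristic from i+1
            pn[i] = 0.5 * (cp + cm)
            vn[i] = 0.5 * (cp - cm) / B
        pn[0] = p0                                  # reservoir: fixed pressure
        vn[0] = (p0 - (p[1] - B * v[1])) / B
        vn[n] = 0.0                                 # closed valve: zero flow
        pn[n] = p[n - 1] + B * v[n - 1]
        p, v = pn, vn
        peak = max(peak, p[n])
    return peak
```

    Half a wave period after closure the same model drives the valve pressure equally far below the initial pressure, which for realistic numbers means values well below vapor pressure; that is precisely the regime in which PTAC's column-separation cavitation model becomes essential.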

  10. Performance of human observers and an automatic 3-dimensional computer-vision-based locomotion scoring method to detect lameness and hoof lesions in dairy cows

    NARCIS (Netherlands)

    Schlageter-Tello, Andrés; Hertem, Van Tom; Bokkers, Eddie A.M.; Viazzi, Stefano; Bahr, Claudia; Lokhorst, Kees

    2018-01-01

    The objective of this study was to determine if a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method was able to outperform human observers for classifying cows as lame or nonlame and for detecting cows affected and nonaffected by specific type(s) of hoof lesion. Data

  11. A Computer Vision System forLocating and Identifying Internal Log Defects Using CT Imagery

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Frederick Lamb; Philip A. Araman

    1991-01-01

    A number of researchers have shown the ability of magnetic resonance imaging (MRI) and computer tomography (CT) imaging to detect internal defects in logs. However, if these devices are ever to play a role in the forest products industry, automatic methods for analyzing data from these devices must be developed. This paper reports research aimed at developing a...

  12. On the application of connectionist models for pattern recognition, robotics and computer vision : A technical report

    NARCIS (Netherlands)

    Kraaijveld, M.A.

    1989-01-01

    Connectionist models, commonly referred to as neural networks, are computing models in which large numbers of processing units are connected to each other with variable "weight". These weight values represent the "strength" of the connection between two units, which can be positive (excitatory, i.e.

  13. Preparation work for the replacement of a process computer: user vision

    International Nuclear Information System (INIS)

    Florit Diaz, C.

    2011-01-01

    The present paper describes the work needed to prepare the plant for its adaptation to the new system of mechanized operation support. In particular, it focuses on changes to different types of plant signals that reach the computer to conform to the requirements of the new data acquisition system.

  14. METHODS OF ASSESSING THE DEGREE OF DESTRUCTION OF RUBBER PRODUCTS USING COMPUTER VISION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2015-01-01

    Full Text Available For the technical inspection of rubber products, improved videoscope methods are essential for analyzing the degree of destruction and aging of rubber in an aggressive environment. The main factor determining the degree of destruction of a rubber product is the extent of crack coverage, which can be described by the total crack area, crack perimeter, geometric shape and other parameters. Developing a methodology for assessing the degree of destruction of rubber products therefore poses the problem of designing a machine vision algorithm that estimates the fracture coverage of a sample and characterizes the fractures. To develop the image processing algorithm, experimental studies were performed on the artificial aging of several samples of products made from different rubbers, during which a series of images of the vulcanizates was acquired in real time. First, the illumination of the image array is stabilized using a Gaussian filter. Thereafter, a binarization operation is applied to each image. The Canny algorithm is used to extract the contours of the surface damage of the sample, and the detected contours are converted into arrays of pixels. Because a single crack may be split across several contours, an algorithm was developed for merging contours by a minimum-distance criterion. Finally, the morphological features of each contour (area, perimeter, length, width, angle of inclination, Minkowski dimension) are calculated. Plots of the destruction parameters obtained by this method are shown for samples of rubber products. The developed method makes it possible to automate the assessment of the degree of aging of rubber products in telemetry systems and to study the dynamics of the aging process of polymers.
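    A much-simplified sketch of the coverage-measurement stage is given below: it binarizes a grayscale image and labels 4-connected dark regions, reporting the crack coverage fraction and per-crack areas. It stands in for, but does not reproduce, the Gaussian-filter/Canny/contour-merging pipeline described in the abstract; the threshold and test image are invented:

    ```python
    import numpy as np
    from collections import deque

    def crack_metrics(gray, thresh=0.5):
        """Binarize a grayscale image (cracks assumed darker than intact rubber)
        and label 4-connected crack regions via breadth-first flood fill.
        Returns the coverage fraction and a list of per-crack pixel areas."""
        binary = gray < thresh
        labels = np.zeros(gray.shape, dtype=int)
        areas = []
        for seed in zip(*np.nonzero(binary)):
            if labels[seed]:
                continue                         # pixel already claimed by a crack
            labels[seed] = len(areas) + 1
            queue, area = deque([seed]), 0
            while queue:
                r, c = queue.popleft()
                area += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < gray.shape[0] and 0 <= cc < gray.shape[1]
                            and binary[rr, cc] and not labels[rr, cc]):
                        labels[rr, cc] = len(areas) + 1
                        queue.append((rr, cc))
            areas.append(area)
        coverage = binary.mean()                 # fraction of surface covered by cracks
        return coverage, areas

    # Two synthetic "cracks" on a 10x10 sample: a 1x4 line and a 2x2 patch.
    img = np.ones((10, 10))
    img[2, 1:5] = 0.0
    img[6:8, 7:9] = 0.0
    cov, areas = crack_metrics(img)   # cov = 0.08, two cracks of 4 pixels each
    ```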

  15. Dose computation in conformal radiation therapy including geometric uncertainties: Methods and clinical implications

    Science.gov (United States)

    Rosu, Mihaela

    The aim of any radiotherapy is to tailor the tumoricidal radiation dose to the target volume and to deliver as little radiation dose as possible to all other normal tissues. However, the motion and deformation induced in human tissue by ventilatory motion is a major issue, as standard practice usually uses only one computed tomography (CT) scan (and hence one instance of the patient's anatomy) for treatment planning. The intrafraction movement that occurs due to physiological processes over time scales shorter than the delivery of one treatment fraction leads to differences between the planned and delivered dose distributions. Due to the influence of these differences on tumors and normal tissues, the tumor control probabilities and normal tissue complication probabilities are likely to be affected by organ motion. In this thesis we apply several methods to compute dose distributions that include the effects of the treatment geometric uncertainties by using the time-varying anatomical information as an alternative to the conventional Planning Target Volume (PTV) approach. The proposed methods depend on the model used to describe the patient's anatomy. The dose and fluence convolution approaches for rigid organ motion are discussed first, with application to liver tumors and the rigid component of the lung tumor movements. For non-rigid behavior a dose reconstruction method that allows the accumulation of the dose to the deforming anatomy is introduced, and applied for lung tumor treatments. Furthermore, we apply the cumulative dose approach to investigate how much information regarding the deforming patient anatomy is needed at the time of treatment planning for tumors located in the thorax. The results are evaluated from a clinical perspective.
All dose calculations are performed using a Monte Carlo based algorithm to ensure more realistic and more accurate handling of tissue heterogeneities---of particular importance in lung cancer treatment planning.

  16. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis

    Science.gov (United States)

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but produced reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  17. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.

    Science.gov (United States)

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (smaller number of DOFs), the CASP was slower but produced reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  18. Automatic species recognition, length measurement and weight determination, using the CatchMeter computer vision system

    OpenAIRE

    Svellingen, Cato; Totland, Bjørn; White, Darren; Øvredal, Jan Tore

    2006-01-01

    The collection of biological data on species composition and the individual length and weight of specimens has always been an important part of fisheries research. Traditionally, the collected information has been recorded on paper prior to being entered into a computer for analysis. Electronic measuring boards that record length measurements, such as the FishMeter (Øvredal and Totland, 2000), have made the data collection process more efficient and reliable. In this contribution we describe a vis...

  19. Measuring human emotions with modular neural networks and computer vision based applications

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2015-05-01

    Full Text Available This paper describes a neural network architecture for emotion recognition for human-computer interfaces and applied systems. In the current research, we propose a combination of the most recent biometric techniques with the neural networks (NN approach for real-time emotion and behavioral analysis. The system will be tested in real-time applications of customers' behavior for distributed on-land systems, such as kiosks and ATMs.

  20. A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge

    Science.gov (United States)

    2016-07-29

    physical device technology with scalable manufacturing methods, a compatible computer architecture, and demonstrations of applications performance and ... The Nation must preserve its leadership role in creating HPC technology and using it across a wide range of applications. Access to advanced ... effort among researchers representing all areas, from services and applications down to the nano-architecture and materials level, to research, discover ...

  1. Supporting Real-Time Computer Vision Workloads using OpenVX on Multicore+GPU Platforms

    Science.gov (United States)

    2015-05-01

    black and white”) image. Harris Feature Tracker Detects Harris corners (features) in an image. Compute Image Pyramid Resizes the image into several...density. To determine the probability that a normalized completion delay falls within the domain [a,b], we sum the area under the curve between x = a...and x = b; the total area under each curve is 1.0. Generally, distributions with the greatest area near 1.0 are best. Our goal is to understand the

  2. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.

    1991-01-01

    Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control. These tools are being applied to robot navigation.

  3. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  4. White paper: A vision for a computing initiative for MFE. Revised version

    International Nuclear Information System (INIS)

    Cohen, R.H.; Crotinger, J.A.; Baldwin, D.E.

    1996-01-01

    The scientific base of magnetic fusion research comprises three capabilities: experimental research, theoretical understanding and computational modeling, with modeling providing the necessary link between the other two. The US now faces a budget climate that will preclude the construction of major new MFE facilities and limit MFE experimental operations. The situation is rather analogous to the one experienced by the DOE Defense Programs (DP), in which continued viability of the nuclear stockpile must be ensured despite the prohibition of underground experimental tests. DP is meeting this challenge, in part, by launching the Accelerated Strategic Computing Initiative (ASCI) to bring advanced algorithms and new hardware to bear on the problems of science-based stockpile stewardship (SBSS). ASCI has as its goal the establishment of a "virtual testing" capability, and it is expected to drive scientific software and hardware development through the next decade. The authors argue that a similar effort is warranted for the MFE program, that is, an initiative aimed at developing a comprehensive simulation capability for MFE, with the goal of enabling "virtual experiments." It would play a role for MFE analogous to that played by present-day and future (ASCI) codes for nuclear weapons design and by LASNEX for ICF, and provide a powerful augmentation to constrained experimental programs. Developing a comprehensive simulation capability could provide an organizing theme for a restructured science-based MFE program. The code would become a central vehicle for integrating the accumulating science base. In the context the authors propose, the relationship would ultimately be reversed: computer simulation would become a primary vehicle for exploration, with experiments providing the necessary confirmatory evidence (or guidance for code improvements).

  5. CTmod—A toolkit for Monte Carlo simulation of projections including scatter in computed tomography

    Czech Academy of Sciences Publication Activity Database

    Malušek, Alexandr; Sandborg, M.; Alm Carlsson, G.

    2008-01-01

    Roč. 90, č. 2 (2008), s. 167-178 ISSN 0169-2607 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * computed tomography * cone beam * scatter Subject RIV: JC - Computer Hardware ; Software Impact factor: 1.220, year: 2008 http://dx.doi.org/10.1016/j.cmpb.2007.12.005

  6. Vision-Based Interest Point Extraction Evaluation in Multiple Environments

    National Research Council Canada - National Science Library

    McKeehan, Zachary D

    2008-01-01

    Computer-based vision is becoming a primary sensor mechanism in many facets of real world 2-D and 3-D applications, including autonomous robotics, augmented reality, object recognition, motion tracking, and biometrics...

  7. Development of a tool to aid the radiologic technologist using augmented reality and computer vision

    International Nuclear Information System (INIS)

    MacDougall, Robert D.; Scherrer, Benoit; Don, Steven

    2018-01-01

    This technical innovation describes the development of a novel device to aid technologists in reducing exposure variation and repeat imaging in computed and digital radiography. The device consists of a color video and depth camera in combination with proprietary software and user interface. A monitor in the x-ray control room displays the position of the patient in real time with respect to automatic exposure control chambers and image receptor area. The thickness of the body part of interest is automatically displayed along with a motion indicator for the examined body part. The aim is to provide an automatic measurement of patient thickness to set the x-ray technique and to assist the technologist in detecting errors in positioning and motion before the patient is exposed. The device has the potential to reduce the incidence of repeat imaging by addressing problems technologists encounter daily during the acquisition of radiographs. (orig.)

  8. Including Internet insurance as part of a hospital computer network security plan.

    Science.gov (United States)

    Riccardi, Ken

    2002-01-01

    Cyber attacks on a hospital's computer network are a new crime to be reckoned with. Should your hospital consider Internet insurance? The author explains this new phenomenon and presents a risk assessment for determining network vulnerabilities.

  9. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as "real-time systems" because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  10. Unmanned aircraft systems image collection and computer vision image processing for surveying and mapping that meets professional needs

    Science.gov (United States)

    Peterson, James Preston, II

    Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
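    The NSSDA accuracy statistics against which such datasets are tested reduce to simple formulas over checkpoint residuals. The sketch below (with made-up residuals, in metres) computes the horizontal and vertical accuracy values reported at the 95% confidence level:

    ```python
    import math

    def nssda_accuracies(dx, dy, dz):
        """Compute RMSEs from checkpoint residuals (surveyed minus mapped
        coordinates) and the NSSDA 95%-confidence accuracy statistics."""
        n = len(dx)
        rmse_x = math.sqrt(sum(e * e for e in dx) / n)
        rmse_y = math.sqrt(sum(e * e for e in dy) / n)
        rmse_r = math.sqrt(rmse_x ** 2 + rmse_y ** 2)   # combined horizontal RMSE
        rmse_z = math.sqrt(sum(e * e for e in dz) / n)
        acc_h = 1.7308 * rmse_r   # NSSDA horizontal accuracy (valid when rmse_x ~ rmse_y)
        acc_v = 1.9600 * rmse_z   # NSSDA vertical accuracy (normally distributed errors)
        return acc_h, acc_v

    # Hypothetical residuals for four surveyed targets.
    acc_h, acc_v = nssda_accuracies([0.02, -0.03, 0.01, 0.02],
                                    [0.01, 0.02, -0.02, 0.03],
                                    [0.04, -0.05, 0.03, 0.02])
    ```

    Planning guidelines of the kind developed in this research invert these formulas: given a target accuracy class, they bound the RMSE that the chosen AGL and target spacing must achieve.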

  11. High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision and hobbyist unmanned aerial vehicles

    Science.gov (United States)

    Dandois, J. P.; Ellis, E. C.

    2013-12-01

    High spatial resolution three-dimensional (3D) measurements of vegetation by remote sensing are advancing ecological research and environmental management. However, substantial economic and logistical costs limit this application, especially for observing phenological dynamics in ecosystem structure and spectral traits. Here we demonstrate a new aerial remote sensing system enabling routine and inexpensive aerial 3D measurements of canopy structure and spectral attributes, with properties similar to those of LIDAR, but with RGB (red-green-blue) spectral attributes for each point, enabling high frequency observations within a single growing season. This "Ecosynth" methodology applies photogrammetric "Structure from Motion" computer vision algorithms to large sets of highly overlapping, low-altitude aerial photographs. Time series of canopy greenness were highly correlated (R2 = 0.88) with MODIS NDVI time series for the same area, and vertical differences in canopy color revealed the early green-up of the dominant canopy species, Liriodendron tulipifera: strong evidence that Ecosynth time series measurements capture vegetation structural and spectral dynamics at the spatial scale of individual trees. Observing canopy phenology in 3D at high temporal resolutions represents a breakthrough in forest ecology. Inexpensive user-deployed technologies for multispectral 3D scanning of vegetation at landscape scales (< 1 km2) herald a new era of participatory remote sensing by field ecologists, community foresters and the interested public.

  12. High-Resolution, Semi-Automatic Fault Mapping Using Umanned Aerial Vehicles and Computer Vision: Mapping from an Armchair

    Science.gov (United States)

    Micklethwaite, S.; Vasuki, Y.; Turner, D.; Kovesi, P.; Holden, E.; Lucieer, A.

    2012-12-01

    Our ability to characterise fractures depends upon the accuracy and precision of field techniques, as well as the quantity of data that can be collected. Unmanned Aerial Vehicles (UAVs; otherwise known as "drones") and photogrammetry, provide exciting new opportunities for the accurate mapping of fracture networks, over large surface areas. We use a highly stable, 8 rotor, UAV platform (Oktokopter) with a digital SLR camera and the Structure-from-Motion computer vision technique, to generate point clouds, wireframes, digital elevation models and orthorectified photo mosaics. Furthermore, new image analysis methods such as phase congruency are applied to the data to semiautomatically map fault networks. A case study is provided of intersecting fault networks and associated damage, from Piccaninny Point in Tasmania, Australia. Outcrops >1 km in length can be surveyed in a single 5-10 minute flight, with pixel resolution ~1 cm. Centimetre scale precision can be achieved when selected ground control points are measured using a total station. These techniques have the potential to provide rapid, ultra-high resolution mapping of fracture networks, from many different lithologies; enabling us to more accurately assess the "fit" of observed data relative to model predictions, over a wide range of boundary conditions.
    Figure caption: High-resolution DEM of a faulted outcrop (Piccaninny Point, Tasmania) generated using the Oktokopter UAV (inset) and photogrammetric techniques.

  13. Shared computational mechanism for tilt compensation accounts for biased verticality percepts in motion and pattern vision.

    Science.gov (United States)

    De Vrijer, M; Medendorp, W P; Van Gisbergen, J A M

    2008-02-01

    To determine the direction of object motion in external space, the brain must combine retinal motion signals and information about the orientation of the eyes in space. We assessed the accuracy of this process in eight laterally tilted subjects who aligned the motion direction of a random-dot pattern (30% coherence, moving at 6 degrees/s) with their perceived direction of gravity (motion vertical) in otherwise complete darkness. For comparison, we also tested the ability to align an adjustable visual line (12 degrees diameter) to the direction of gravity (line vertical). Settings were accurate for small head tilts, but tilts beyond about 60 degrees revealed a pattern of large systematic errors (often >30 degrees) that was virtually identical in both tasks. Regression analysis confirmed that mean errors in the two tasks were closely related, with slopes close to 1.0 and correlations >0.89. Control experiments ruled out that motion settings were based on processing of individual single-dot paths. We conclude that the conversion of both motion direction and line orientation on the retina into a spatial frame of reference involves a shared computational strategy. Simulations with two spatial-orientation models suggest that the pattern of systematic errors may be the downside of an optimal strategy for dealing with imperfections in the tilt signal that is implemented before the reference-frame transformation.
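    The shared-strategy conclusion can be captured by a toy model: a single reference-frame transformation, fed an underestimated tilt signal, produces the same systematic error whether it is applied to a motion direction or to a line orientation. The gain value below is purely illustrative and is not taken from the paper:

    ```python
    def spatial_direction(retinal_deg, head_tilt_deg, gain=0.7):
        """Map a retinal direction into space by adding back the estimated head
        tilt. The under-unity gain (an illustrative assumption) models an
        imperfect tilt signal applied before the reference-frame transformation."""
        return retinal_deg + gain * head_tilt_deg

    # A truly vertical stimulus projects at -tilt on the retina of a tilted head.
    tilt = 90.0
    err_motion = spatial_direction(-tilt, tilt)  # error of a motion-vertical setting
    err_line = spatial_direction(-tilt, tilt)    # error of a line-vertical setting
    # The shared transformation predicts identical large errors (about -27 deg
    # here), mirroring the slopes near 1.0 and correlations >0.89 reported above.
    ```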

  14. Image formation simulation for computer-aided inspection planning of machine vision systems

    Science.gov (United States)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented along with a versatile two-robot-setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real-time graphics and high-quality off-line rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real-time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is on the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.

  15. STUDY OF THE NAVIGATION OF A WHEELCHAIR USING COMPUTATIONAL VISION CONCEPTS

    Directory of Open Access Journals (Sweden)

    Marcos Batista Figueredo

    2017-03-01

    Full Text Available Mobility is an important means of social interaction that, besides allowing the accomplishment of several daily tasks, connects the patient with the social and working world. For people with paraplegia or tetraplegia, the wheelchair is an important means of exercising their citizenship. Many studies seek to make navigation simple and efficient but, in general, the proposed solutions involve extensive sensing, intrusiveness and high cost. We propose a computational model that allows the navigation of a wheelchair using facial expressions. Unlike the works surveyed, our model is based on only two facial cues, the pose of the head and the closing of the eyes, and a single input sensor, a USB camera. The model converts facial expressions into navigation commands for the chair, and two techniques perform the interpretation: cascade classifiers and Active Shape Models (ASM). The first uses a classifier capable of detecting eye closure; the second combines the ASM response with the Pearson correlation coefficient. Tests show that the model has excellent accuracy and precision and robust performance in detecting closed eyes and estimating head pose, coping well with the usual difficulties of pattern recognition such as occlusion and illumination. The model achieved an average hit rate of 98% with a false positive rate of about 2%.
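    The Pearson-correlation matching applied to the ASM response can be sketched as follows; the landmark vectors and command names are invented for illustration and are not the authors' data:

    ```python
    import math

    def pearson(x, y):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Compare an observed ASM landmark vector against reference pose templates;
    # the navigation command whose template correlates best wins.
    reference = {"forward": [1.0, 2.0, 3.0, 4.0], "left": [4.0, 3.0, 2.0, 1.0]}
    observed = [1.1, 2.0, 2.9, 4.2]
    command = max(reference, key=lambda k: pearson(observed, reference[k]))
    # command == "forward"
    ```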

  16. Low Vision

    Science.gov (United States)


  17. Local annealing of shape memory alloys using laser scanning and computer vision

    Science.gov (United States)

    Hafez, Moustapha; Bellouard, Yves; Sidler, Thomas C.; Clavel, Reymond; Salathe, Rene-Paul

    2000-11-01

    A complete set-up for local annealing of Shape Memory Alloys (SMA) is proposed. Such alloys, when plastically deformed at a given low temperature, have the ability to recover a previously memorized shape simply by heating up to a higher temperature. They find more and more applications in the fields of robotics and microengineering. There is a tremendous advantage in using local annealing, because this process can produce monolithic parts which have different mechanical behavior at different locations of the same body. Using this approach, it is possible to integrate all the functionality of a device within one piece of material. The set-up is based on a 2 W laser diode emitting at 805 nm and a scanner head. The laser beam is coupled into an optical fiber 60 μm in diameter. The fiber output is focused on the SMA work-piece using a relay lens system with 1:1 magnification, resulting in a spot diameter of 60 μm. An imaging system is used to control the position of the laser spot on the sample. In order to displace the spot on the surface, a tip/tilt laser scanner is used. The scanner is positioned in a pre-objective configuration and allows a scan field size of more than 10 × 10 mm². A graphical user interface of the scan field allows the user to quickly set up marks and alter their placement and power density. This is achieved by computer control of the X and Y positions of the scanner as well as the laser diode power. An SMA micro-gripper with a surface area of less than 1 mm² and a jaw opening of 200 μm has been realized using this set-up. It is electrically actuated, and a controlled force of 16 mN can be applied to hold and release small objects such as graded-index micro-lenses at a cycle time of typically 1 s.

  18. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and from principal component analysis to dimensionality reduction. Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  19. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  20. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    Science.gov (United States)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems because of the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed using open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of lab setups, it provides a means to produce quantitative comparison motion capture kinematic data. Additionally, data such as the exercise volume required for small spaces such as the Orion capsule can be determined.
METHODS: OpenCV is an open source computer vision library that provides the
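
    The record is truncated here, but the core step of a COTS marker-based capture pipeline can be sketched. Below is a minimal, numpy-only illustration of locating a marker centroid by thresholding a frame, the same operation OpenCV typically performs with cv2.inRange followed by cv2.moments. The frame and threshold values are synthetic placeholders, not taken from the system described.

```python
import numpy as np

def marker_centroid(frame: np.ndarray, threshold: int = 200):
    """Locate the centroid of the bright-marker pixels in a grayscale frame.

    Pixels at or above `threshold` form a binary mask; the centroid is the
    mean pixel position of that mask (the first image moment), which yields
    subpixel marker coordinates for kinematic tracking.
    """
    mask = frame >= threshold
    if not mask.any():
        return None  # no marker visible in this frame
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# Synthetic 100x100 frame with a bright 5x5 marker centered at (42, 17)
frame = np.zeros((100, 100), dtype=np.uint8)
frame[15:20, 40:45] = 255
print(marker_centroid(frame))  # (42.0, 17.0)
```

    Tracking a marker over a video is then a matter of applying this per frame and differencing positions over time to obtain velocities.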

  1. Application of Assistive Computer Vision Methods to Oyama Karate Techniques Recognition

    Directory of Open Access Journals (Sweden)

    Tomasz Hachaj

    2015-09-01

    Full Text Available In this paper we propose a novel algorithm that enables online action segmentation and classification. The algorithm segments, from an incoming motion capture (MoCap) data stream, sport (or karate) movement sequences that are later processed by a classification algorithm. The segmentation is based on a Gesture Description Language classifier that is trained with an unsupervised learning algorithm. The classification is performed by a continuous-density, forward-only hidden Markov model (HMM) classifier. Our methodology was evaluated on a unique dataset consisting of MoCap recordings of six Oyama karate martial artists, including a multiple-time champion of Kumite Knockdown Oyama karate. The dataset consists of 10 classes of actions and includes dynamic actions of stands, kicks and blocking techniques. The total number of samples was 1236. We examined several HMM classifiers with various numbers of hidden states, as well as a Gaussian mixture model (GMM) classifier, to empirically find the best setup of the proposed method on our dataset, using leave-one-out cross-validation. The recognition rate of our methodology differs between karate techniques and ranges from 81% ± 15% up to 100%. Our method is not limited to this class of actions but can be easily adapted to any other MoCap-based actions. The description of our approach and its evaluation are the main contributions of this paper. The results presented in this paper are the effects of pioneering research on online karate action classification.
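
    A forward-only HMM classifier of the kind described above scores a segmented sequence by its forward likelihood under each action model and picks the best. The paper's actual features and trained parameters are not given here; the sketch below shows only the generic forward algorithm for a toy HMM with one-dimensional Gaussian emissions, with all parameters invented for illustration.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of N(mean, var) evaluated at x (broadcasts over states)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def forward_log_likelihood(obs, pi, A, means, variances):
    """Forward algorithm for an HMM with 1-D Gaussian emissions.

    obs: sequence of scalar observations
    pi:  initial state distribution, shape (S,)
    A:   state transition matrix, shape (S, S)
    Returns log P(obs | model), computed with per-step scaling to
    avoid underflow on long sequences.
    """
    alpha = pi * gaussian_pdf(obs[0], means, variances)
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * gaussian_pdf(x, means, variances)
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Two-state toy model; classification picks the model with the higher likelihood
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
means = np.array([0.0, 5.0])
variances = np.array([1.0, 1.0])
obs = [0.1, -0.2, 4.8, 5.1]
print(forward_log_likelihood(obs, pi, A, means, variances))
```

    With one trained model per action class, the classifier returns the class whose model gives the highest forward log-likelihood for the segment.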

  2. Automated analysis of retinal imaging using machine learning techniques for computer vision.

    Science.gov (United States)

    De Fauw, Jeffrey; Keane, Pearse; Tomasev, Nenad; Visentin, Daniel; van den Driessche, George; Johnson, Mike; Hughes, Cian O; Chu, Carlton; Ledsam, Joseph; Back, Trevor; Peto, Tunde; Rees, Geraint; Montgomery, Hugh; Raine, Rosalind; Ronneberger, Olaf; Cornebise, Julien

    2016-01-01

    There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves). Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring the therapeutic success.

  3. Computer vision and machine learning for robust phenotyping in genome-wide studies

    Science.gov (United States)

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R. V. Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K.

    2017-01-01

    Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for the genetic study of the abiotic stress iron deficiency chlorosis (IDC) in soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions were evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. The ML-generated phenotypic data were subsequently utilized for a genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline, which identified a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems. PMID:28272456

  4. Computer vision and machine learning for robust phenotyping in genome-wide studies.

    Science.gov (United States)

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R V Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K

    2017-03-08

    Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for the genetic study of the abiotic stress iron deficiency chlorosis (IDC) in soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions were evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. The ML-generated phenotypic data were subsequently utilized for a genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline, which identified a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems.

  5. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2010-02-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for the 3D-posture computation of an unknown object by means of the collaborative hybrid stereo vision system, which is then used to steer the robot team to a desired position relative to the object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.
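
    The triangulation-based 3D-posture computation rests on standard stereo geometry. As a simplified illustration (the paper's hybrid, multi-camera configuration is more general), the sketch below recovers depth from disparity for an idealized pair of parallel, rectified cameras; the focal length and baseline are made-up values.

```python
def triangulate(x_left: float, x_right: float,
                focal_px: float, baseline_m: float) -> float:
    """Depth of a point seen by two parallel, rectified cameras.

    x_left, x_right: horizontal image coordinates (pixels) of the same
    scene point in the left and right images. For this geometry the
    depth is Z = f * B / d, where d = x_left - x_right is the disparity.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# f = 800 px, baseline 0.30 m, disparity 40 px -> Z = 6 m
print(triangulate(420.0, 380.0, focal_px=800.0, baseline_m=0.30))  # 6.0
```

    Triangulating several feature points of the tracked object in this way gives the 3D posture used as the formation-control reference.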

  6. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2009-12-01

    Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for the 3D-posture computation of an unknown object by means of the collaborative hybrid stereo vision system, which is then used to steer the robot team to a desired position relative to the object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

  7. COMPUTATIONAL VISION IN UV-MAPPING OF TEXTURED MESHES COMING FROM PHOTOGRAMMETRIC RECOVERY: UNWRAPPING FRESCOED VAULTS

    Directory of Open Access Journals (Sweden)

    P. G. Robleda

    2016-06-01

    Full Text Available It is sometimes difficult to represent "on paper" the existing reality of architectonic elements, depending on the complexity of their geometry; and not only in cases of complex geometry: even non-planar surfaces can need a "special planar format" for their graphical representation. Nowadays there are many methods to obtain three-dimensional recovery of our Cultural Heritage, with different trade-offs between accuracy and cost, even achieving high accuracy with "low-cost" recovery methods such as digital photogrammetry, which easily allow us to obtain a graphical representation "on paper": ortho-images from different points of view. This can be useful for many purposes but, for others, an orthographic projection is not really very interesting. In off-site restoration of frescoed vaults, a "planar format" representation is needed to see in true magnitude the paintings on the intrados of the vault, because of the general methodology used: gluing the fresco to a fabric, removing the fresco-fabric from the support, moving it to the laboratory, removing the fresco from the fabric, restoring the fresco, gluing the restored fresco onto another fabric, laying the restored fresco in its original location and removing the fabric. For this reason, an unfolded model is often needed, in a similar way to how a cylinder or cone can be unfolded, but in this case with the texture included: UV unwrapping. Unfold and fold-back processes can be especially interesting in the restoration of frescoed vaults and domes for chromatic recovery of paintings, reconstruction of partially missing geometries, transfer of paintings between surfaces, etc.
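
    For the simple case the abstract appeals to (a cylinder can be unfolded exactly), the unwrap can be written directly: points on a cylindrical intrados map to the plane with arc length preserved. The sketch below assumes the vault is a circular cylinder with its axis along y; real frescoed vaults require the more general, mesh-based UV unwrapping the paper discusses.

```python
import numpy as np

def unwrap_cylinder(points: np.ndarray, radius: float) -> np.ndarray:
    """Unfold points lying on a cylindrical vault into the plane.

    The cylinder axis is assumed along y; each point (x, y, z) maps to
    (r * theta, y), where theta = atan2(z, x). Arc length along the
    intrados is preserved, giving a true-magnitude representation.
    """
    theta = np.arctan2(points[:, 2], points[:, 0])
    return np.column_stack([radius * theta, points[:, 1]])

# A quarter-arc of radius 2 unrolls to a strip of width r * pi/2
t = np.linspace(0, np.pi / 2, 5)
pts = np.column_stack([2 * np.cos(t), np.zeros_like(t), 2 * np.sin(t)])
flat = unwrap_cylinder(pts, radius=2.0)
print(flat[-1, 0])  # ~3.1416, i.e. 2 * pi/2
```

    Folding back is the inverse map, which is what allows the restored, planar fresco to be relocated on the original curved surface.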

  8. Use of Data Mining and Computer Vision Algorithms in Studies of Magnetic Reconnection

    Science.gov (United States)

    Sipes, T.; Karimabadi, H.; Gosling, J. T.; Phan, T.; Yilmaz, A.

    2011-12-01

    Knowledge discovery from large data sets collected from spacecraft measurements as well as petascale simulations remains a major obstacle to scientific progress. For example, our recent 3D kinetic simulation of reconnection included over 3 trillion particles and generated well over 200 TB of data. Similarly, identification of interesting features in spacecraft data can be quite time-consuming and by definition focuses on simpler features, as the human eye has limited capability in deciphering complex patterns and dependencies. Machine learning algorithms offer a solution to this problem. Here we present our latest results on the use of machine learning algorithms in the analysis of (i) 2D and 3D kinetic simulations of reconnection and (ii) reconnection events in the solar wind using Wind data. The results are quite promising and point to the power of these techniques to find hidden relationships. For example, identification of flux ropes in the solar wind remains quite controversial since, unlike at the magnetopause, where one can search for bipolar signatures of the magnetic field component in the boundary normal coordinates, there is no generally agreed-upon method of identifying them. As preparation for this, we show results of our technique applied to time series generated from simulations of flux ropes. We find that the algorithms were not only able to detect flux ropes in the simulation data very accurately, but were also able to distinguish crossings across a flux rope from those along the axis of a flux rope. In the case of spacecraft data, our models were able to detect crossings of reconnection exhausts and distinguish them from non-exhausts. Finally, we use machine learning algorithms to compare the crossings of reconnection exhausts from simulations and spacecraft observations in the solar wind.

  9. An integrated computable general equilibrium model including multiple types and uses of water

    OpenAIRE

    Luckmann, Jonas Jens

    2015-01-01

    Water is a scarce resource in many regions of the world, and competition for water is an increasing problem. To counter this trend, policies are needed that regulate the supply of and demand for water. As water is used in many economic activities, water-related management decisions usually have complex implications. Economic simulation models have proven useful for ex-ante assessment of the consequences of policy changes. Specifically, Computable General Equilibrium (CGE) models are very suitable to ana...

  10. CeleST: Computer Vision Software for Quantitative Analysis of C. elegans Swim Behavior Reveals Novel Features of Locomotion

    Science.gov (United States)

    Vora, Mehul M.; Guo, Suzhen; Metaxas, Dimitris; Driscoll, Monica

    2014-01-01

    In the effort to define genes and specific neuronal circuits that control behavior and plasticity, the capacity for high-precision automated analysis of behavior is essential. We report on comprehensive computer vision software for analysis of swimming locomotion of C. elegans, a simple animal model initially developed to facilitate elaboration of genetic influences on behavior. C. elegans swim test software CeleST tracks swimming of multiple animals, measures 10 novel parameters of swim behavior that can fully report dynamic changes in posture and speed, and generates data in several analysis formats, complete with statistics. Our measures of swim locomotion utilize a deformable model approach and a novel mathematical analysis of curvature maps that enable even irregular patterns and dynamic changes to be scored without need for thresholding or dropping outlier swimmers from study. Operation of CeleST is mostly automated and only requires minimal investigator interventions, such as the selection of videotaped swim trials and choice of data output format. Data can be analyzed from the level of the single animal to populations of thousands. We document how the CeleST program reveals unexpected preferences for specific swim “gaits” in wild-type C. elegans, uncovers previously unknown mutant phenotypes, efficiently tracks changes in aging populations, and distinguishes “graceful” from poor aging. The sensitivity, dynamic range, and comprehensive nature of CeleST measures elevate swim locomotion analysis to a new level of ease, economy, and detail that enables behavioral plasticity resulting from genetic, cellular, or experience manipulation to be analyzed in ways not previously possible. PMID:25033081
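
    The curvature maps underlying CeleST's swim measures are built from quantities like the one sketched below: discrete curvature along a sampled 2-D midline. This is a generic finite-difference estimate for illustration, not the program's exact deformable-model computation.

```python
import numpy as np

def curvature_profile(points: np.ndarray) -> np.ndarray:
    """Discrete curvature at each point of a sampled 2-D curve.

    Curvature is estimated from first and second finite differences
    (one-sided at the endpoints) via
    k = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2),
    which is invariant to how the curve is parameterized.
    """
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check: a circle of radius 2 has constant curvature 1/2
t = np.linspace(0, 2 * np.pi, 400)
circle = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
print(curvature_profile(circle)[5])  # ~0.5
```

    Stacking such profiles frame by frame yields a curvature map over body position and time, from which posture dynamics and swim "gaits" can be scored.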

  11. CeleST: computer vision software for quantitative analysis of C. elegans swim behavior reveals novel features of locomotion.

    Directory of Open Access Journals (Sweden)

    Christophe Restif

    2014-07-01

    Full Text Available In the effort to define genes and specific neuronal circuits that control behavior and plasticity, the capacity for high-precision automated analysis of behavior is essential. We report on comprehensive computer vision software for analysis of swimming locomotion of C. elegans, a simple animal model initially developed to facilitate elaboration of genetic influences on behavior. C. elegans swim test software CeleST tracks swimming of multiple animals, measures 10 novel parameters of swim behavior that can fully report dynamic changes in posture and speed, and generates data in several analysis formats, complete with statistics. Our measures of swim locomotion utilize a deformable model approach and a novel mathematical analysis of curvature maps that enable even irregular patterns and dynamic changes to be scored without need for thresholding or dropping outlier swimmers from study. Operation of CeleST is mostly automated and only requires minimal investigator interventions, such as the selection of videotaped swim trials and choice of data output format. Data can be analyzed from the level of the single animal to populations of thousands. We document how the CeleST program reveals unexpected preferences for specific swim "gaits" in wild-type C. elegans, uncovers previously unknown mutant phenotypes, efficiently tracks changes in aging populations, and distinguishes "graceful" from poor aging. The sensitivity, dynamic range, and comprehensive nature of CeleST measures elevate swim locomotion analysis to a new level of ease, economy, and detail that enables behavioral plasticity resulting from genetic, cellular, or experience manipulation to be analyzed in ways not previously possible.

  12. Rapid identification of pearl powder from Hyriopsis cumingii by Tri-step infrared spectroscopy combined with computer vision technology

    Science.gov (United States)

    Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua

    2018-01-01

    Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach, based on Tri-step infrared spectroscopy with enhanced resolution combined with chemometrics, for qualitative identification of pearl powder originating from three different quality grades of pearls and for quantitative prediction of the proportion of shell powder adulterating pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the pearl quality trait of visual color categories. Though the different grades of pearl powder and adulterated pearl powder have almost identical IR spectra, the SD-IR peak intensity at about 861 cm⁻¹ (ν2 band) exhibited regular enhancement with increasing quality grade of the pearls, while the 1082 cm⁻¹ (ν1 band) and the 712 cm⁻¹ and 699 cm⁻¹ (ν4 band) peaks showed the reverse trend. By contrast, only the peak intensity at 862 cm⁻¹ was enhanced regularly with increasing concentration of shell powder. Thus, the bands in the ranges (1550-1350 cm⁻¹, 730-680 cm⁻¹) and (830-880 cm⁻¹, 690-725 cm⁻¹) could serve as exclusive ranges to discriminate the three distinct pearl powders and to identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on the IR spectra were established successfully by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner.
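
    The qualitative classification step uses PCA on the IR spectra. As a rough sketch only (the real model is trained on measured pearl-powder spectra, which are not available here), the code below performs PCA via the SVD on synthetic two-class "spectra" in which one class carries an extra band, loosely mimicking the 862 cm⁻¹ marker of shell-powder adulteration; all signal shapes and noise levels are invented.

```python
import numpy as np

def pca_scores(X: np.ndarray, n_components: int = 2):
    """Project mean-centered spectra onto their leading principal components.

    X: (n_samples, n_wavenumbers) matrix of spectra.
    Returns (scores, components) from the thin SVD of the centered data;
    Xc @ Vt.T equals U * s, the usual PCA score matrix.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

# Synthetic two-class "spectra": class B has an extra Gaussian band
rng = np.random.default_rng(0)
grid = np.arange(100)
base = np.exp(-0.5 * ((grid - 50) / 8.0) ** 2)
band = np.exp(-0.5 * ((grid - 70) / 3.0) ** 2)
A = base + 0.02 * rng.standard_normal((20, 100))
B = base + 0.8 * band + 0.02 * rng.standard_normal((20, 100))
scores, _ = pca_scores(np.vstack([A, B]))
# The first principal component separates the two classes
print(scores[:20, 0].mean(), scores[20:, 0].mean())
```

    In practice the PCA scores feed a classifier for grade discrimination, while PLS regresses the spectra against the known adulteration proportions.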

  13. Colour vision deficiency.

    Science.gov (United States)

    Simunovic, M P

    2010-05-01

    Colour vision deficiency is one of the commonest disorders of vision and can be divided into congenital and acquired forms. Congenital colour vision deficiency affects as many as 8% of males and 0.5% of females--the difference in prevalence reflects the fact that the commonest forms of congenital colour vision deficiency are inherited in an X-linked recessive manner. Until relatively recently, our understanding of the pathophysiological basis of colour vision deficiency largely rested on behavioural data; however, modern molecular genetic techniques have helped to elucidate its mechanisms. The current management of congenital colour vision deficiency lies chiefly in appropriate counselling (including career counselling). Although visual aids may be of benefit to those with colour vision deficiency when performing certain tasks, the evidence suggests that they do not enable wearers to obtain normal colour discrimination. In the future, gene therapy remains a possibility, with animal models demonstrating amelioration following treatment.

  14. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    Science.gov (United States)

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards, users can make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in an Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.

  15. Computer vision for sports

    DEFF Research Database (Denmark)

    Thomas, Graham; Gade, Rikke; Moeslund, Thomas B.

    2017-01-01

    The world of sports intrinsically involves fast and accurate motion that is not only challenging for competitors to master, but can be difficult for coaches and trainers to analyze, and for audiences to follow. The nature of most sports means that monitoring by the use of sensors or other devices...

  16. Role of isotope scan, including positron emission tomography/computed tomography, in nodular goitre.

    Science.gov (United States)

    Giovanella, Luca; Ceriani, Luca; Treglia, Giorgio

    2014-08-01

    Nuclear medicine techniques were first used in clinical practice for diagnosing and treating thyroid diseases in the 1950s, and are still an integral part of the work-up of thyroid nodules. Thyroid imaging with iodine or iodine-analogue isotopes is the only examination able to prove the presence of autonomously functioning thyroid tissue, which excludes malignancy with a high probability. In addition, a thyroid scan with technetium-99m-methoxyisobutylisonitrile is able to avoid unnecessary surgical procedures for cytologically inconclusive thyroid nodules, as confirmed by meta-analysis and cost-effectiveness studies. Finally, positron emission tomography alone, and positron emission tomography combined with computed tomography, with (18)F-fluoro-2-deoxy-D-glucose are also promising for diagnosing thyroid diseases, but further studies are needed before introducing them into clinical practice. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Artificial vision.

    Science.gov (United States)

    Zarbin, M; Montemagno, C; Leary, J; Ritch, R

    2011-09-01

    A number of treatment options are emerging for patients with retinal degenerative disease, including gene therapy, trophic factor therapy, visual cycle inhibitors (e.g., for patients with Stargardt disease and allied conditions), and cell transplantation. A radically different approach, which will augment but not replace these options, is termed neural prosthetics ("artificial vision"). Although rewiring of inner retinal circuits and inner retinal neuronal degeneration occur in association with photoreceptor degeneration in retinitis pigmentosa (RP), it is possible to create visually useful percepts by stimulating retinal ganglion cells electrically. This fact has led to the development of techniques to induce photosensitivity in cells that are not normally light sensitive, as well as to the development of the bionic retina. Advances in artificial vision continue at a robust pace. These advances are based on the use of molecular engineering and nanotechnology to render cells light-sensitive, to target ion channels to the appropriate cell type (e.g., bipolar cell) and/or cell region (e.g., dendritic tree vs. soma), and on sophisticated image-processing algorithms that take advantage of our knowledge of signal processing in the retina. Combined with advances in gene therapy, pathway-based therapy, and cell-based therapy, "artificial vision" technologies create a powerful armamentarium with which ophthalmologists will be able to treat blindness in patients who have a variety of degenerative retinal diseases.

  18. 31 CFR 351.66 - What book-entry Series EE savings bonds are included in the computation?

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What book-entry Series EE savings... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.66 What book-entry Series EE savings bonds are included in the computation? (a) We include all bonds that...

  19. Efficient discrete Gabor functions for robot vision

    Science.gov (United States)

    Weiman, Carl F. R.

    1994-03-01

    A new discrete Gabor function provides subpixel resolution of phase while overcoming many of the computational burdens of current approaches to Gabor function implementation. Applications include hyperacuity measurement of binocular disparity and optic flow for stereo vision. Convolution is avoided by exploiting the band-pass property to subsample the image plane. A general-purpose front-end processor for robot vision, based on a wavelet interpretation of this discrete Gabor function, can be constructed by tessellating and pyramiding the elementary filter. Computational efficiency opens the door to real-time implementations that mimic many properties of the simple and complex cells in the visual cortex.
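
    The elementary filter that gets tessellated and pyramided is a Gabor function: a complex sinusoid under a Gaussian window. The sketch below builds a standard discrete 2-D Gabor kernel for illustration; it is not Weiman's specific construction, whose subsampling scheme is not reproduced here, and all kernel parameters are arbitrary.

```python
import numpy as np

def gabor_kernel(size: int, wavelength: float, sigma: float,
                 theta: float = 0.0) -> np.ndarray:
    """Complex 2-D Gabor kernel: a plane wave under a Gaussian envelope.

    Filtering an image with this kernel gives a local band-pass response
    whose magnitude and phase support subpixel (hyperacuity) measurements
    such as binocular disparity; theta sets the carrier orientation.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * xr / wavelength)
    return envelope * carrier

g = gabor_kernel(size=15, wavelength=6.0, sigma=3.0)
print(g.shape)  # (15, 15)
```

    A filter bank is obtained by varying theta and wavelength, and a pyramid by applying the same kernel at successively subsampled image scales.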

  20. Preparation work for the replacement of a process computer: user vision; Trabajos de preparacion para la sustitucion de un ordenador de proceso: Vision de usuario

    Energy Technology Data Exchange (ETDEWEB)

    Florit Diaz, C.

    2011-07-01

    The present paper describes the work needed to prepare the plant for its adaptation to the new computerized operation support system. In particular, it focuses on the changes to the different types of plant signals that reach the computer, so that they conform to the requirements of the new data acquisition system.

  1. A Real Time Quality Monitoring System for the Lighting Industry: A Practical and Rapid Approach Using Computer Vision and Image Processing (CVIP Tools

    Directory of Open Access Journals (Sweden)

    C.K. Ng

    2011-11-01

    Full Text Available In China, the manufacturing of lighting products is very labour-intensive. The approach used to check quality and control production relies on operators who test using various types of fixtures. In order to increase the competitiveness of the manufacturer and the efficiency of production, the authors propose an integrated system. This system has two major elements: a computer vision system (CVS) and a real-time monitoring system (RTMS). This model not only focuses on the rapid and practical application of modern technology to a traditional industry, but also represents a process innovation in the lighting industry. This paper describes the design and development of a prototype lighting inspection system based on a practical and fast approach using computer vision and image processing (CVIP) tools. LabVIEW with IMAQ Vision Builder is the chosen tool for building the CVS. Experimental results show that this system produces a lower error rate than humans do in the quality-checking process. The whole integrated manufacturing strategy, aimed at achieving better performance, is most suitable for China and other labour-intensive environments such as India.
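
    The inspection logic itself is implemented in LabVIEW/IMAQ in the paper; as a language-neutral sketch of the underlying pass/fail check, the code below flags a unit whose captured image deviates from a golden template in too many pixels. The arrays, tolerances and thresholds are all synthetic placeholders, not values from the described system.

```python
import numpy as np

def inspect(image: np.ndarray, template: np.ndarray,
            pixel_tol: int = 25, defect_ratio: float = 0.01) -> bool:
    """Pass/fail quality check by template differencing.

    A unit fails when more than `defect_ratio` of its pixels deviate from
    the golden template by more than `pixel_tol` grey levels.
    """
    diff = np.abs(image.astype(int) - template.astype(int))
    return bool((diff > pixel_tol).mean() <= defect_ratio)

template = np.full((64, 64), 128, dtype=np.uint8)
good = template.copy()
bad = template.copy()
bad[:16, :16] = 255          # a large bright defect (6.25% of pixels)
print(inspect(good, template), inspect(bad, template))  # True False
```

    A real-time monitoring layer would then log each pass/fail decision per unit, which is the role the RTMS plays in the integrated system.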

  2. 77 FR 34063 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2012-06-08

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-847] Certain Electronic Devices, Including Mobile.... International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that a complaint was filed with the U.S. International Trade Commission on May 2, 2012, under section 337 of the Tariff Act of 1930...

  3. 77 FR 27078 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2012-05-08

    ... INTERNATIONAL TRADE COMMISSION [Docket No. 2896] Certain Electronic Devices, Including Mobile... Comments Relating to the Public Interest AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International Trade Commission has received a complaint...

  4. 78 FR 1247 - Certain Electronic Devices, Including Wireless Communication Devices, Tablet Computers, Media...

    Science.gov (United States)

    2013-01-08

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-862] Certain Electronic Devices, Including...; Institution of Investigation Pursuant to United States Code AGENCY: U.S. International Trade Commission... Trade Commission on November 30, 2012, under section 337 of the Tariff Act of 1930, as amended, 19 U.S.C...

  5. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    Science.gov (United States)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimating areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Considering explicitly those errors, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out, where radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from radar data. Comparing the quality of areal rainfall estimation by RCs with rain gauges and reference data helps to investigate the benefit of the RCs. The value of this additional source of data is not only assessed for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the result shows that the RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, by testing larger uncertainties for RCs, they were found to be useful up to a certain level for areal rainfall estimation and discharge simulation.
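One standard way to merge a noisy but dense sensor (the RainCars) with an accurate but sparse one (rain gauges) is inverse-variance weighting, where each observation contributes in proportion to its known measurement precision. The rainfall values and error variances below are illustrative assumptions, not the laboratory figures from the study.

```python
# Hypothetical example: combine a rain-gauge estimate with a noisier
# RainCar estimate by inverse-variance weighting.
gauge_mm, gauge_var = 5.0, 0.25      # rain gauge: accurate but sparse
raincar_mm, raincar_var = 6.0, 2.0   # RainCar: noisier, but dense coverage

w_gauge = 1.0 / gauge_var
w_rc = 1.0 / raincar_var
combined_mm = (w_gauge * gauge_mm + w_rc * raincar_mm) / (w_gauge + w_rc)
combined_var = 1.0 / (w_gauge + w_rc)   # always below either input variance
```

The combined variance is smaller than that of either source alone, which is the formal sense in which the RCs "provide useful additional information" even when their individual errors are large.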

  6. Human factors design of nuclear power plant control rooms including computer-based operator aids

    International Nuclear Information System (INIS)

    Bastl, W.; Felkel, L.; Becker, G.; Bohr, E.

    1983-01-01

    The scientific handling of human factors problems in control rooms began around 1970 on the basis of safety considerations. Some recent research work deals with the development of computerized systems like plant balance calculation, safety parameter display, alarm reduction and disturbance analysis. For disturbance analysis purposes it is necessary to homogenize the information presented to the operator according to the actual plant situation in order to supply the operator with the information he most urgently needs at the time. Different approaches for solving this problem are discussed, and an overview is given on what is being done. Other research projects concentrate on the detailed analysis of operators' diagnosis strategies in unexpected situations, in order to obtain a better understanding of their mental processes and the influences upon them when such situations occur. This project involves the use of a simulator and sophisticated recording and analysis methods. Control rooms are currently designed with the aid of mock-ups. They enable operators to contribute their experience to the optimization of the arrangement of displays and controls. Modern control rooms are characterized by increasing use of process computers and CRT (Cathode Ray Tube) displays. A general concept for the integration of the new computerized system and the conventional control panels is needed. The technical changes modify operators' tasks, and future ergonomic work in nuclear plants will need to consider the re-allocation of function between man and machine, the incorporation of task changes in training programmes, and the optimal design of information presentation using CRTs. Aspects of developments in control room design are detailed, typical research results are dealt with, and a brief forecast of the ergonomic contribution to be made in the Federal Republic of Germany is given.

  7. Neural Network Prediction of Failure of Damaged Composite Pressure Vessels from Strain Field Data Acquired by a Computer Vision Method

    Science.gov (United States)

    Russell, Samuel S.; Lansing, Matthew D.

    1997-01-01

    This effort used a new and novel method of acquiring strains called Sub-pixel Digital Video Image Correlation (SDVIC) on impact damaged Kevlar/epoxy filament wound pressure vessels during a proof test. To predict the burst pressure, the hoop strain field distribution around the impact location from three vessels was used to train a neural network. The network was then tested on additional pressure vessels. Several variations on the network were tried. The best results were obtained using a single hidden layer. SDVIC is a full-field non-contact computer vision technique which provides in-plane deformation and strain data over a load differential. This method was used to determine hoop and axial displacements, hoop and axial linear strains, the in-plane shear strains and rotations in the regions surrounding impact sites in filament wound pressure vessels (FWPV) during proof loading by internal pressurization. The relationship between these deformation measurement values and the remaining life of the pressure vessels, however, requires a complex theoretical model or numerical simulation. Both of these techniques are time consuming and complicated. Previous results using neural network methods had been successful in predicting the burst pressure for graphite/epoxy pressure vessels based upon acoustic emission (AE) measurements in similar tests. The neural network associates the character of the AE amplitude distribution, which depends upon the extent of impact damage, with the burst pressure. Similarly, higher amounts of impact damage are theorized to cause a higher amount of strain concentration in the damage-affected zone at a given pressure and result in lower burst pressures. This relationship suggests that a neural network might be able to find an empirical relationship between the SDVIC strain field data and the burst pressure, analogous to the AE method, with greater speed and simplicity than theoretical or finite element modeling.
The process of testing SDVIC
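The subpixel idea behind correlation methods like SDVIC can be illustrated in one dimension: cross-correlate a reference and a shifted signal, then refine the integer correlation peak with a three-point parabolic fit on the log-correlation (exact for Gaussian-shaped peaks). The signal shapes and the 0.3-sample shift are fabricated for illustration.

```python
import numpy as np

# 1-D sketch of subpixel displacement recovery by cross-correlation.
x = np.arange(64, dtype=float)
ref = np.exp(-(x - 32.0) ** 2 / 18.0)
shifted = np.exp(-(x - 32.3) ** 2 / 18.0)   # true subpixel shift: +0.3

c = np.correlate(shifted, ref, mode="full") # peak near lag = +0.3
k = int(np.argmax(c))                       # integer-pixel peak location
lm, l0, lp = np.log(c[k - 1]), np.log(c[k]), np.log(c[k + 1])
delta = (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))  # parabolic refinement
est_shift = (k - (len(x) - 1)) + delta      # lag zero sits at index len(x)-1
```

The refinement recovers the 0.3-sample displacement to well under a tenth of a pixel, which is the kind of resolution a full 2-D implementation exploits to map strain fields.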

  8. Computation of binding energies including their enthalpy and entropy components for protein-ligand complexes using support vector machines.

    Science.gov (United States)

    Koppisetty, Chaitanya A K; Frank, Martin; Kemp, Graham J L; Nyholm, Per-Georg

    2013-10-28

    Computing binding energies of protein-ligand complexes including their enthalpy and entropy terms by means of computational methods is an appealing approach for selecting initial hits and for further optimization in early stages of drug discovery. Despite their importance, computational predictions of the thermodynamic components have received little attention and still lack reasonable solutions. In this study, support vector machines are used for developing scoring functions to compute binding energies and their enthalpy and entropy components of protein-ligand complexes. The binding energies computed from our newly derived scoring functions have better Pearson's correlation coefficients with experimental data than previously reported scoring functions in benchmarks for protein-ligand complexes from the PDBBind database. The protein-ligand complexes with binding energies dominated by the enthalpy or entropy term could be qualitatively classified by the newly derived scoring functions with high accuracy. Furthermore, it is found that the inclusion of comprehensive descriptors based on ligand properties in the scoring functions improved the accuracy of classification as well as the prediction of binding energies including their thermodynamic components. The prediction of binding energies including the enthalpy and entropy components using the support vector machine based scoring functions should be of value in the drug discovery process.
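A minimal sketch of the regression idea, assuming a linear model trained with the epsilon-insensitive (SVR-style) loss by batch subgradient descent. The two features and the synthetic "binding energies" (y = 2*x1 - x2) stand in for the study's descriptors; nothing here reproduces the paper's actual scoring functions.

```python
import numpy as np

# Toy epsilon-insensitive linear regression (the loss used by SVR).
X = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 1.], [1., 2.]])
y = np.array([2., -1., 1., 3., 0.])          # synthetic "binding energies"

w, b = np.zeros(2), 0.0
lr, eps = 0.01, 0.1
for _ in range(5000):
    err = y - (X @ w + b)
    active = np.abs(err) > eps               # points outside the eps-tube
    g = np.sign(err) * active                # subgradient direction per sample
    w += lr * X.T @ g                        # updates stop once all residuals
    b += lr * g.sum()                        # fall inside the tube

pred = X @ w + b
```

Once every residual lies inside the epsilon-tube the subgradient vanishes and training halts, which is the defining property that distinguishes this loss from ordinary least squares.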

  9. Embodied Visions

    DEFF Research Database (Denmark)

    Grodal, Torben Kragh

    Embodied Visions presents a groundbreaking analysis of film through the lens of bioculturalism, revealing how human biology as well as human culture determine how films are made and experienced. Throughout the book the author uses the breakthroughs of modern brain science to explain general... melodramas - from evolutionary and psychological perspectives, the author also reflects on social issues at the intersection of film theory and neuropsychology. These include moral problems in film viewing, how we experience realism and character identification, and the value of the subjective forms...

  10. Experience in nuclear materials accountancy, including the use of computers, in the UKAEA

    International Nuclear Information System (INIS)

    Anderson, A.R.; Adamson, A.S.; Good, P.T.; Terrey, D.R.

    1976-01-01

    The UKAEA have operated systems of nuclear materials accountancy in research and development establishments handling large quantities of material for over 20 years. In the course of that time changing requirements for nuclear materials control and increasing quantities of materials have required that accountancy systems be modified and altered to improve either the fundamental system or manpower utilization. The same accountancy principles are applied throughout the Authority but procedures at the different establishments vary according to the nature of their specific requirements; there is much in the cumulative experience of the UKAEA which could prove of value to other organizations concerned with nuclear materials accountancy or safeguards. This paper reviews the present accountancy system in the UKAEA and summarizes its advantages. Details are given of specific experience and solutions which have been found to overcome difficulties or to strengthen previous weak points. Areas discussed include the use of measurements, the establishment of measurement points (which is relevant to the designation of MBAs), the importance of regular physical stock-taking, and the benefits stemming from the existence of a separate accountancy section independent of operational management at large establishments. Some experience of a dual system of accountancy and criticality control is reported, and the present status of computerization of nuclear material accounts is summarized. Important aspects of the relationship between management systems of accountancy and safeguards' requirements are discussed briefly. (author)

  11. The utility of including pathology reports in improving the computational identification of patients

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2016-01-01

    Full Text Available Background: Celiac disease (CD) is a common autoimmune disorder. Efficient identification of patients may improve chronic management of the disease. Prior studies have shown that searching International Classification of Diseases-9 (ICD-9) codes alone is inaccurate for identifying patients with CD. In this study, we developed automated classification algorithms leveraging pathology reports and other clinical data in Electronic Health Records (EHRs) to refine the subset population preselected using the ICD-9 code (579.0). Materials and Methods: EHRs were searched for the established ICD-9 code (579.0) suggesting CD, based on which an initial identification of cases was obtained. In addition, laboratory results for tissue transglutaminase were extracted. Using natural language processing, we analyzed pathology reports from upper endoscopy. Twelve machine learning classifiers using different combinations of variables related to ICD-9 CD status, laboratory result status, and pathology reports were evaluated to find the best possible CD classifier. Ten-fold cross-validation was used to assess the results. Results: A total of 1498 patient records were used, including 363 confirmed cases and 1135 false positive cases that served as controls. A logistic model based on both clinical and pathology report features produced the best results: Kappa of 0.78, F1 of 0.92, and area under the curve (AUC) of 0.94, whereas in contrast using ICD-9 only generated poor results: Kappa of 0.28, F1 of 0.75, and AUC of 0.63. Conclusion: Our automated classification system presented an efficient and reliable way to improve the performance of CD patient identification.
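The core result, that a logistic model over combined features beats the ICD-9 flag alone, can be sketched with a toy data set. The ten synthetic records below are fabricated for illustration: every record carries the ICD-9 code (as in the preselected population), so that flag alone cannot separate confirmed cases from false positives.

```python
import numpy as np

# Logistic regression combining ICD-9, lab, and pathology-report flags.
X = np.array([                 # columns: icd9, lab_positive, path_positive
    [1, 1, 1], [1, 1, 1], [1, 0, 1],                    # confirmed cases
    [1, 1, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0],         # false positives
    [1, 0, 0], [1, 0, 0], [1, 0, 0],                    # (controls)
], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=float)

w, b = np.zeros(3), 0.0
for _ in range(3000):          # gradient descent on the logistic log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc_combined = float(np.mean((p > 0.5) == (y == 1)))
acc_icd9_only = float(np.mean(y == 1))   # ICD-9 alone flags every record
```

Because the pathology flag carries the real signal, the combined model approaches perfect accuracy while the ICD-9-only rule is stuck at the prevalence rate, mirroring the Kappa gap reported in the abstract.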

  12. Computer vision applied to herbarium specimens of German trees: testing the future utility of the millions of herbarium specimen images for automated identification.

    Science.gov (United States)

    Unger, Jakob; Merhof, Dorit; Renner, Susanne

    2016-11-16

    Global Plants, a collaboration between JSTOR and some 300 herbaria, now contains about 2.48 million high-resolution images of plant specimens, a number that continues to grow, and collections that are digitizing their specimens at high resolution are allocating considerable resources to the maintenance of computer hardware (e.g., servers) and to acquiring digital storage space. We here apply machine learning, specifically the training of a Support-Vector-Machine, to classify specimen images into categories, ideally at the species level, using the 26 most common tree species in Germany as a test case. We designed an analysis pipeline and classification system consisting of segmentation, normalization, feature extraction, and classification steps and evaluated the system on two test sets, one with 26 species, the other with 17, in each case using 10 images per species of plants collected between 1820 and 1995, which simulates the empirical situation that most named species are represented in herbaria and databases, such as JSTOR, by few specimens. We achieved 73.21% accuracy of species assignments in the larger test set, and 84.88% in the smaller test set. The results of this first application of a computer vision algorithm trained on images of herbarium specimens show that despite the problem of overlapping leaves, leaf-architectural features can be used to categorize specimens to species with good accuracy. Computer vision is poised to play a significant role in future rapid identification at least for frequently collected genera or species in the European flora.
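The segmentation, normalization, feature extraction, and classification pipeline can be illustrated end to end on toy data. The 8x8 synthetic "specimen images", the two "species", and the nearest-mean classifier (standing in for the paper's Support-Vector-Machine) are all illustrative assumptions.

```python
import numpy as np

def features(img):
    mask = img > 0.5                      # segmentation by global threshold
    area = mask.sum() / mask.size         # normalized leaf area
    rows = mask.any(axis=1).sum()
    cols = mask.any(axis=0).sum()
    return np.array([area, rows / max(cols, 1)])  # area + aspect ratio

def make_leaf(h, w):                      # rectangular "leaf" in an 8x8 frame
    img = np.zeros((8, 8))
    img[:h, :w] = 1.0
    return img

species_a = [make_leaf(6, 2), make_leaf(7, 2)]   # tall, narrow leaves
species_b = [make_leaf(3, 6), make_leaf(2, 7)]   # short, broad leaves
mean_a = np.mean([features(i) for i in species_a], axis=0)
mean_b = np.mean([features(i) for i in species_b], axis=0)

def classify(img):                        # nearest-mean stand-in for the SVM
    f = features(img)
    return "a" if np.linalg.norm(f - mean_a) < np.linalg.norm(f - mean_b) else "b"
```

With only two hand-crafted features per image, unseen leaves of either shape class are assigned correctly, which conveys why simple leaf-architectural features were enough for good accuracy in the study.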

  13. Research on Internal Layout Optimization of Logistics Node under the Conditions of Complex Terrain Based on Computer Vision and Geographical Simulation System

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2012-01-01

    Full Text Available This paper addresses the problem of expressing spatial relationships within a logistics node using computer vision technology. It proposes a mathematical model for the internal layout optimization of a logistics node that jointly considers function-zone geometry, the optimal area utilization rate, and the minimum material handling cost, and then designs a highly mixed genetic simulated annealing algorithm based on multiagent methods to obtain a layout solution. Comparative results show that the model and algorithms put forward in this paper can achieve large-scale internal layout optimization of a logistics node under conditions of complex terrain and multiple constraints.
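The annealing half of such a hybrid can be sketched on a one-dimensional toy layout: place six function zones on a line to minimize total material-handling cost, sum of flow(i, j) times distance(i, j). The flow matrix is fabricated, and plain simulated annealing stands in for the paper's genetic/multi-agent hybrid.

```python
import numpy as np

# Simulated annealing for a toy 1-D facility-layout problem.
rng = np.random.default_rng(0)
n = 6
flow = rng.integers(0, 10, size=(n, n))
flow = (flow + flow.T) // 2                # symmetric material flows

def cost(perm):                            # perm[slot] = zone placed there
    pos = np.argsort(perm)                 # slot index of each zone
    return sum(flow[i, j] * abs(pos[i] - pos[j])
               for i in range(n) for j in range(i + 1, n))

perm = np.arange(n)
init_cost = best = cost(perm)
temp = 10.0
for _ in range(2000):
    i, j = rng.integers(0, n, size=2)
    cand = perm.copy()
    cand[i], cand[j] = cand[j], cand[i]    # neighbour: swap two zones
    delta = cost(cand) - cost(perm)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        perm = cand                        # accept downhill, or uphill w.p.
    best = min(best, cost(perm))
    temp *= 0.995                          # geometric cooling schedule
```

The acceptance of occasional uphill swaps at high temperature is what lets annealing escape the local optima that pure swap-descent gets trapped in; the genetic component in the paper plays a complementary diversification role.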

  14. PTA-1 computer program for treating pressure transients in hydraulic networks including the effect of pipe plasticity

    International Nuclear Information System (INIS)

    Youngdahl, C.K.; Kot, C.A.

    1977-01-01

    Pressure pulses in the intermediate sodium system of a liquid-metal-cooled fast breeder reactor, such as may originate from a sodium/water reaction in a steam generator, are propagated through the complex sodium piping network to system components such as the pump and intermediate heat exchanger. To assess the effects of such pulses on continued reliable operation of these components and to contribute to system designs which result in the mitigation of these effects, Pressure Transient Analysis (PTA) computer codes are being developed for accurately computing the transmission of pressure pulses through a complicated fluid transport system, consisting of piping, fittings and junctions, and components. PTA-1 provides an extension of the well-accepted and verified fluid hammer formulation for computing hydraulic transients in elastic or rigid piping systems to include plastic deformation effects. The accuracy of the modeling of pipe plasticity effects on transient propagation has been validated using results from two sets of Stanford Research Institute experiments. Validation of PTA-1 using the latter set of experiments is described briefly. The comparisons of PTA-1 computations with experiments show that (1) elastic-plastic deformation of LMFBR-type piping can have a significant qualitative and quantitative effect on pressure pulse propagation, even in simple systems; (2) classical fluid-hammer theory gives erroneous results when applied to situations where piping deforms plastically; and (3) the computational model incorporated in PTA-1 for predicting plastic deformation and its effect on transient propagation is accurate.
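The elastic-pipe limit of classical fluid-hammer theory can be shown with the Joukowsky estimate: an instantaneous velocity change produces a surge of rho * a * dv, where the wave speed a is reduced by pipe-wall compliance, the effect PTA-1 extends into the plastic range. The fluid and pipe values below are illustrative (water in a steel pipe), not LMFBR sodium data.

```python
import math

# Joukowsky surge with wave speed corrected for pipe-wall elasticity.
rho = 1000.0       # fluid density, kg/m^3 (illustrative: water)
K = 2.2e9          # fluid bulk modulus, Pa
D, e = 0.5, 0.01   # pipe diameter and wall thickness, m
E = 200e9          # pipe elastic modulus, Pa (steel)
dv = 2.0           # instantaneous velocity change, m/s

a_rigid = math.sqrt(K / rho)                            # rigid-pipe wave speed
a_elastic = math.sqrt((K / rho) / (1 + K * D / (E * e)))  # compliant pipe
dp = rho * a_elastic * dv                               # Joukowsky surge, Pa
```

Even fully elastic walls cut the wave speed and hence the surge noticeably; plastic deformation absorbs still more energy, which is why classical theory overpredicts transients once the pipe yields.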

  15. Edward Rhodes Stitt Award Lecture. Will a computer (with artificial vision) replace the surgical pathologist (or other health professionals)?

    Science.gov (United States)

    Heffner, D K

    1994-04-01

    Many jobs require vision for most of the tasks performed, and the discussion focuses on the nature of human visual perception. Arguments are given to support the claim that visual perception is a very complicated function of the brain. To attempt to answer whether or not artificial intelligence (AI) will ever be able to essentially do what the brain does, the history and current state of AI research are examined, with special attention to neural net research.

  16. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation and the basic molecular biology and genetics of colour vision. The book takes a broad interdisciplinary approach, combining basics in vision sciences with the most recent developments in the area. It includes an extensive list of technical terms and explanations to encourage student understanding, and successfully brings together the most important areas of the subject in one volume.

  17. The potential of computer vision, optical backscattering parameters and artificial neural network modelling in monitoring the shrinkage of sweet potato (Ipomoea batatas L.) during drying.

    Science.gov (United States)

    Onwude, Daniel I; Hashim, Norhashila; Abdan, Khalina; Janius, Rimfiel; Chen, Guangnan

    2018-03-01

    Drying is a method used to preserve agricultural crops. During the drying of products with high moisture content, structural changes in shape, volume, area, density and porosity occur. These changes could affect the final quality of dried product and also the effective design of drying equipment. Therefore, this study investigated a novel approach in monitoring and predicting the shrinkage of sweet potato during drying. Drying experiments were conducted at temperatures of 50-70 °C and samples thicknesses of 2-6 mm. The volume and surface area obtained from camera vision, and the perimeter and illuminated area from backscattered optical images were analysed and used to evaluate the shrinkage of sweet potato during drying. The relationship between dimensionless moisture content and shrinkage of sweet potato in terms of volume, surface area, perimeter and illuminated area was found to be linearly correlated. The results also demonstrated that the shrinkage of sweet potato based on computer vision and backscattered optical parameters is affected by the product thickness, drying temperature and drying time. A multilayer perceptron (MLP) artificial neural network with input layer containing three cells, two hidden layers (18 neurons), and five cells for output layer, was used to develop a model that can monitor, control and predict the shrinkage parameters and moisture content of sweet potato slices under different drying conditions. The developed ANN model satisfactorily predicted the shrinkage and dimensionless moisture content of sweet potato with correlation coefficient greater than 0.95. Combined computer vision, laser light backscattering imaging and artificial neural network can be used as a non-destructive, rapid and easily adaptable technique for in-line monitoring, predicting and controlling the shrinkage and moisture changes of food and agricultural crops during drying. © 2017 Society of Chemical Industry.
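A much smaller version of the paper's MLP idea can be written in plain numpy: one hidden layer fitted by backpropagation to a toy linear shrinkage law, echoing the reported linear correlation between shrinkage and dimensionless moisture content. The layer size and the synthetic relation volume_ratio = 0.2 + 0.8 * moisture are illustrative assumptions, not the study's 18-neuron architecture or data.

```python
import numpy as np

# One-hidden-layer MLP fitted to a toy linear shrinkage law.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 40).reshape(-1, 1)   # dimensionless moisture content
y = 0.2 + 0.8 * X                              # relative volume (toy law)

W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                   # hidden-layer activations
    return h, h @ W2 + b2

mse_initial = float(np.mean((forward(X)[1] - y) ** 2))
lr = 0.05
for _ in range(5000):                          # plain backpropagation on MSE
    h, out = forward(X)
    g_out = 2.0 * (out - y) / len(X)           # dMSE/dout
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)      # back through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(axis=0)

mse_final = float(np.mean((forward(X)[1] - y) ** 2))
```

After training, the network reproduces the linear shrinkage-moisture relation closely, the simplest case of the correlation-above-0.95 fits reported in the abstract.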

  18. Visual Hazards Associated With Using Computers | Jegede ...

    African Journals Online (AJOL)

    The aim of the study was to determine the hazards associated with using computers. A survey of 100 computer users working in business centers in Ilorin, Kwara State was done. Some of the visual hazards noted included: headache, eye redness, eye ache, double (blurred) vision, diminishing vision, eye watering and eye ...

  19. Recent advances in the development and transfer of machine vision technologies for space

    Science.gov (United States)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  20. Rendering for machine vision prototyping

    Science.gov (United States)

    Reiner, Jacek

    2008-09-01

    Machine Vision systems for manufacturing quality inspection are interdisciplinary solutions including lighting, optics, cameras, image processing, segmentation, feature analysis, classification as well as integration with the manufacturing process. The design and optimization of such systems, especially the image acquisition setup, is mainly driven by experiment. This requires deep know-how and a well-equipped laboratory, which still does not guarantee an optimal development process and results. This paper proposes a novel usage of rendering, originating from 3D computer graphics, for machine vision prototyping and optimization. The presented technique, based on physically-based rendering, aids the selection or optimization of luminaires, tolerancing of mechanical construction and object handling, robustness predetermination and surface flaw simulation. The rendering setup utilizes mesh modeling, bump and normal mapping and light distribution sharpening with IES data files. The light simulation experiments performed for metal surfaces (the face surface of bearing rollers) are validated.

  1. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS, laser sensors etc., suffer several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost, maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real life vision based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal based goal-driven navigation can be carried out using vision sensing. The development concept of vision based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller based sensor systems. The book descri...

  2. Living with vision loss

    Science.gov (United States)

    Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... Low vision is a visual disability. Wearing regular glasses or contacts does not help. People with low vision have ...

  3. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  4. ICECON: a computer program used to calculate containment back pressure for LOCA analysis (including ice condenser plants)

    International Nuclear Information System (INIS)

    1976-07-01

    The ICECON computer code provides a method for conservatively calculating the long term back pressure transient in the containment resulting from a hypothetical Loss-of-Coolant Accident (LOCA) for PWR plants including ice condenser containment systems. The ICECON computer code was developed from the CONTEMPT/LT-022 code. A brief discussion of the salient features of a typical ice condenser containment is presented. Details of the ice condenser models are explained. The corrections and improvements made to CONTEMPT/LT-022 are included. The organization of the code, including the calculational procedure, is outlined. The user's manual, to be used in conjunction with the CONTEMPT/LT-022 user's manual, a sample problem, a time-step study (solution convergence) and a comparison of ICECON results with the results of the NSSS vendor are presented. In general, containment pressures calculated with the ICECON code agree with those calculated by the NSSS vendor using the same mass and energy release rates to the containment.

  5. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Directory of Open Access Journals (Sweden)

    Sebastian McBride

    Full Text Available Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.

  6. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Science.gov (United States)

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
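Several of the seven requirements can be combined into a compact sketch: a priority map formed from bottom-up salience and top-down relevance coded as an excitation/inhibition ratio, a saccade threshold, and inhibition of return after each overt shift. All maps and constants below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

# Toy priority map with thresholded saccades and inhibition of return.
rng = np.random.default_rng(2)
salience = rng.random((8, 8))            # bottom-up, from feature contrasts
excitation = rng.random((8, 8)) + 0.5    # top-down support for task features
inhibition = rng.random((8, 8)) + 0.5    # top-down suppression
relevance = excitation / inhibition      # requirement 6: excitation/inhibition ratio
priority = salience * relevance          # requirement 4: converged priority map

threshold = 0.1                          # requirement 5: saccade trigger level
fixations = []
for _ in range(3):                       # three overt shifts of attention
    if priority.max() < threshold:
        break                            # nothing salient enough to fixate
    target = np.unravel_index(np.argmax(priority), priority.shape)
    fixations.append(target)
    priority[target] = 0.0               # requirement 2: inhibition of return
```

Zeroing each fixated location is the simplest form of the medium-term spatial memory the model calls for; it guarantees the system explores new locations rather than refixating the global maximum.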

  7. Mapping Sub-Saharan African Agriculture in High-Resolution Satellite Imagery with Computer Vision & Machine Learning

    Science.gov (United States)

    Debats, Stephanie Renee

    Smallholder farms dominate in many parts of the world, including Sub-Saharan Africa. These systems are characterized by small, heterogeneous, and often indistinct field patterns, requiring a specialized methodology to map agricultural landcover. In this thesis, we developed a benchmark labeled data set of high-resolution satellite imagery of agricultural fields in South Africa. We presented a new approach to mapping agricultural fields, based on efficient extraction of a vast set of simple, highly correlated, and interdependent features, followed by a random forest classifier. The algorithm achieved similarly high performance across agricultural types, including spectrally indistinct smallholder fields, and demonstrated the ability to generalize across large geographic areas. In sensitivity analyses, we determined that multi-temporal images provided greater performance gains than the addition of multi-spectral bands. We also demonstrated how active learning can be incorporated in the algorithm to create smaller, more efficient training data sets, which reduced computational resources, minimized the need for humans to hand-label data, and boosted performance. We designed a patch-based uncertainty metric to drive the active learning framework, based on the regular grid of a crowdsourcing platform, and demonstrated how subject matter experts can be replaced with fleets of crowdsourcing workers. Our active learning algorithm achieved performance similar to that of an algorithm trained with randomly selected data, but with 62% fewer data samples. This thesis furthers the goal of providing accurate agricultural landcover maps at a scale that is relevant for the dominant smallholder class. Accurate maps are crucial for monitoring and promoting agricultural production. Furthermore, improved agricultural landcover maps will aid a host of other applications, including landcover change assessments, cadastral surveys to strengthen smallholder land rights, and constraints for crop modeling
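    The patch-based uncertainty idea can be sketched simply: score each grid patch by how ambiguous the classifier's per-pixel probabilities are, and queue the most ambiguous patches for labelling. The grid, the probabilities, and the margin-style uncertainty measure below are illustrative assumptions, not the thesis's exact metric.

```python
def patch_uncertainty(probs):
    """Mean per-pixel uncertainty for one patch; p = 0.5 is maximally uncertain."""
    return sum(1.0 - abs(2.0 * p - 1.0) for p in probs) / len(probs)

def select_patches(patches, k=1):
    """Return indices of the k patches the labellers should see next."""
    scored = sorted(range(len(patches)),
                    key=lambda i: patch_uncertainty(patches[i]),
                    reverse=True)
    return scored[:k]

# Classifier probabilities ("field" vs "not field") for three grid patches.
patches = [
    [0.95, 0.9, 0.99],   # confidently field      -> low labelling priority
    [0.55, 0.4, 0.5],    # ambiguous              -> most informative to label
    [0.1, 0.05, 0.2],    # confidently not field  -> low labelling priority
]
print(select_patches(patches, k=1))  # -> [1]
```

    Spending the labelling budget on ambiguous patches rather than random ones is what lets an active-learning loop match random-sampling performance with far less labelled data.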

  8. Overview of sports vision

    Science.gov (United States)

    Moore, Linda A.; Ferreira, Jannie T.

    2003-03-01

    Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).

  9. Vision by Man and Machine.

    Science.gov (United States)

    Poggio, Tomaso

    1984-01-01

    Studies of stereo vision guide research on how animals see and how computers might accomplish this human activity. Discusses a sequence of algorithms to first extract information from visual images and then to calculate the depths of objects in the three-dimensional world, concentrating on stereopsis (stereo vision). (JN)

  10. 75 FR 41522 - Novell, Inc., Including On-Site Leased Workers From Affiliated Computer Services, Inc., (ACS...

    Science.gov (United States)

    2010-07-16

    ... related to research, design and technical support for the production of computer software. The company reports that workers leased from Affiliated Computer Services, Inc., (ACS) were employed on-site at the... Computer Services, Inc., (ACS), Provo, UT; Amended Certification Regarding Eligibility To Apply for Worker...

  11. Marr's vision: twenty-five years on.

    Science.gov (United States)

    Glennerster, Andrew

    2007-06-05

    It is twenty-five years since the posthumous publication of David Marr's book Vision [1]. Only 35 years old when he died, Marr had already dramatically influenced vision research. His book, and the series of papers that preceded it, have had a lasting impact on the way that researchers approach human and computer vision.

  12. Marr's vision: Twenty-five years on

    OpenAIRE

    Glennerster, Andrew

    2007-01-01

    It is twenty-five years since the posthumous publication of David Marr's book Vision [1]. Only 35 years old when he died, Marr had already dramatically influenced vision research. His book, and the series of papers that preceded it, have had a lasting impact on the way that researchers approach human and computer vision.

  13. NCG61/5: Programa de Doctorado Conjunto Erasmus Mundus en Visión Computacional - ComVis (Erasmus Mundus Joint Doctoral Programme in Computer Vision - EMJD ComVis)

    OpenAIRE

    Universidad de Granada

    2012-01-01

    Erasmus Mundus Joint Doctoral Programme in Computer Vision (EMJD ComVis). Approved at the extraordinary session of the Governing Council held on 2 May 2012.

  14. Exsanguination of turbot and the effect on fillet quality measured mechanically, by sensory evaluation, and with computer vision

    NARCIS (Netherlands)

    Roth, B.; Schelvis-Smit, A.A.M.; Stien, L.H.; Foss, A.; Nortvedt, R.; Imsland, A.

    2007-01-01

    In order to investigate the impact of blood residues on the end quality of exsanguinated and unbled farmed turbot (Scophthalmus maximus), meat quality was evaluated using mechanical, sensory, and computer imaging techniques. The results show that exsanguination is important for improving the visual appearance, and that the blood residue can be quantified using a computer imaging system.

  15. Exsanguination of turbot and the effect on fillet quality measured mechanically, by sensory evaluation, and with computer vision.

    Science.gov (United States)

    Roth, B; Schelvis-Smit, R; Stien, L H; Foss, A; Nortvedt, R; Imsland, A

    2007-11-01

    In order to investigate the impact of blood residues on the end quality of exsanguinated and unbled farmed turbot (Scophthalmus maximus), meat quality was evaluated using mechanical, sensory, and computer imaging techniques. The results show that exsanguination is important for improving the visual appearance, and the blood residue could be quantified using a computer imaging system. After 6 d of storage, mechanical analysis using puncture test or shear force showed no difference between exsanguinated and unbled fish. The trained taste panel was unable to detect any differences between exsanguinated and unbled fish after 6 and 14 d of storage. We conclude that over a 2-wk period the blood residue in turbot meat does not affect texture or sensory quality, but does affect the visual appearance.

  16. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy, where the failure of single elements usually does not induce any observable degradation of system performance. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed to implement 'neuromorphic' circuits that mimic neural functions and to fabricate building blocks that work like their biological role models. Neuromorphic systems, like the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real time. It is argued that future artificial vision systems

  17. Machine Vision For Industrial Control:The Unsung Opportunity

    Science.gov (United States)

    Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.

    1984-05-01

    Vision modules have primarily been developed to relieve those pressures newly brought into existence by Inspection (QUALITY) and Robotic (PRODUCTIVITY) mandates. Industrial control pressure, on the other hand, stems from the older first-industrial-revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on the back burner or ignored it entirely, because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "Visual and Computational Speed Must Be Increased and Dramatically!" There are modular hardwired-logic systems that are fast but, all too often, they are not very bright. Such units measure the fill factor of bottles as they spin by, read labels on cans, count stacked plastic cups, or monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are vision's analog to the robot industry's pick-and-place (RIA Type E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Institute of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include: Electronics, Food, Sports, Pharmaceuticals, Machine Tools and Arc Welding.

  18. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  19. Vision-based human motion analysis: An overview

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    2007-01-01

    Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human-Computer

  20. Síndrome de visión de la computadora en estudiantes preuniversitarios Computer vision syndrome observed in high school students

    Directory of Open Access Journals (Sweden)

    María Emilia Fernández González

    2010-01-01

    Full Text Available OBJECTIVE: To describe the clinical and epidemiological behavior of computer vision syndrome in 10th grade students at the "Rafael María de Mendive" high school from September 2007 to June 2008. METHODS: A descriptive, cross-sectional study was conducted. The universe comprised all students of the grade with clinical manifestations related to computer use (183 patients); the sample of 45 was drawn by simple random sampling (1 in 4). The following variables were considered: age group, sex, clinical manifestations, use of spectacles, time spent working at the computer, interval of visual rest per hour of work, and visual outcome after 3 months of treatment. RESULTS: The female sex predominated (68.9%), the mean age was 16.5 years, and the most relevant symptoms were headache (82.2%) and eye strain (75.5%). The visual symptoms mentioned above arose in patients who wore spectacles and who worked at the computer for more than 4 hours; myopia predominated among the refractive errors (70%), and visual rest breaks of 15-20 minutes improved the symptom complex (51.2%). CONCLUSIONS: Computer vision syndrome constitutes a health problem at this school, so early diagnosis is important given the negative effects it has on the adolescent, the school, and the family.

  1. Contribution to the algorithmic and efficient programming of new parallel architectures including accelerators for neutron physics and shielding computations

    International Nuclear Information System (INIS)

    Dubois, J.

    2011-01-01

    In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than real models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a great deal of computing power may be necessary. The work of this thesis is, first, the evaluation of new computing hardware such as graphics cards and massively multi-core chips, and their application to eigenvalue problems for neutron simulation. Then, in order to address the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then carry out the experiments of this research on several national supercomputers, such as the Titane hybrid machine of the Computing Center for Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the interest of this research for everyday use with local computing resources. (author) [fr

  2. Computer vision-based diameter maps to study fluoroscopic recordings of small intestinal motility from conscious experimental animals.

    Science.gov (United States)

    Ramírez, I; Pantrigo, J J; Montemayor, A S; López-Pérez, A E; Martín-Fontelles, M I; Brookes, S J H; Abalo, R

    2017-08-01

    When available, fluoroscopic recordings are a relatively cheap, non-invasive and technically straightforward way to study gastrointestinal motility. Spatiotemporal maps have been used to characterize motility of intestinal preparations in vitro, or in anesthetized animals in vivo. Here, a new automated computer-based method was used to construct spatiotemporal motility maps from fluoroscopic recordings obtained in conscious rats. Conscious, non-fasted, adult, male Wistar rats (n=8) received intragastric administration of barium contrast, and 1-2 hours later, when several loops of the small intestine were well-defined, a 2-minute fluoroscopic recording was obtained. Spatiotemporal diameter maps (Dmaps) were automatically calculated from the recordings. Three recordings were also manually analyzed for comparison. Frequency analysis was performed in order to calculate relevant motility parameters. In each conscious rat, a stable recording (17-20 seconds) was analyzed. The Dmaps manually and automatically obtained from the same recording were comparable, but the automated process was faster and provided higher resolution. Two frequencies of motor activity dominated; lower frequency contractions (15.2±0.9 cpm) had an amplitude approximately five times greater than higher frequency events (32.8±0.7 cpm). The automated method developed here needed little investigator input, provided high-resolution results with short computing times, and automatically compensated for breathing and other small movements, allowing recordings to be made without anesthesia. Although slow and/or infrequent events could not be detected in the short recording periods analyzed to date (17-20 seconds), this novel system enhances the analysis of in vivo motility in conscious animals. © 2017 John Wiley & Sons Ltd.
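    The frequency-analysis step can be illustrated on a single row of a diameter map (diameter at one gut position over time): a discrete Fourier transform picks out the dominant contraction frequency. The sampling rate and the synthetic 15 cpm trace below are assumptions for illustration, not data from the study.

```python
import math

def dominant_cpm(diameters, fps):
    """Dominant frequency of a diameter trace, in cycles per minute (cpm)."""
    n = len(diameters)
    mean = sum(diameters) / n
    centred = [d - mean for d in diameters]
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):          # skip the DC term
        re = sum(centred[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centred[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n * 60.0      # DFT bin -> Hz -> cycles per minute

# Synthetic 20 s trace at 30 fps with a 15 cpm (0.25 Hz) contraction wave.
fps, secs = 30, 20
trace = [2.0 + 0.5 * math.sin(2 * math.pi * 0.25 * t / fps) for t in range(fps * secs)]
print(dominant_cpm(trace, fps))  # -> 15.0
```

    In practice this analysis would be run per position along the intestine, letting both the ~15 cpm and ~33 cpm rhythms reported above show up as separate spectral peaks.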

  3. Sensory quality evaluation for appearance of needle-shaped green tea based on computer vision and nonlinear tools.

    Science.gov (United States)

    Dong, Chun-Wang; Zhu, Hong-Kai; Zhao, Jie-Wen; Jiang, Yong-Wen; Yuan, Hai-Bo; Chen, Quan-Sheng

    2017-06-01

    Tea is one of the three most widely consumed beverages in the world. In China, green tea has the largest consumption, and needle-shaped green tea, such as Maofeng tea and Sparrow Tongue tea, accounts for more than 40% of green tea (Zhu et al., 2017). The appearance of green tea is one of the important indexes in the evaluation of green tea quality. Especially in market transactions, the price of tea is usually determined by its appearance (Zhou et al., 2012). Human sensory evaluation is usually conducted by experts and is easily affected by factors such as lighting, experience, and psychological and visual state. Moreover, while people can distinguish slight differences between similar colors or textures, the specific grade of a tea is hard to determine (Chen et al., 2008). Because human descriptions of color and texture are qualitative, it is hard to evaluate sensory quality accurately, objectively, and in a standard manner. Color is an important visual property of a computer image (Xie et al., 2014; Khulal et al., 2016); texture is a visual expression of how image grayscale and color change with spatial position, which can be used to describe the roughness and directivity of the surface of an object (Sanaeifar et al., 2016). Researchers have already used computer vision and image technologies to identify the varieties, grades, and origins of tea (Chen et al., 2008; Xie et al., 2014; Zhu et al., 2017). Most of that research has targeted crush, tear, and curl (CTC) red (green) broken tea, curly green tea (Bilochun tea), and flat-typed green tea (West Lake Dragon-well green tea) as information sources. However, the aim of the above research was to establish qualitative evaluation methods for tea quality (Fu et al., 2013). There is little literature on the sensory evaluation of the appearance quality of needle-shaped green tea, and especially little research on quantitative evaluation models (Zhou et al., 2012; Zhu et al., 2017).
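    As a concrete example of turning a qualitative colour description into numbers, the sketch below computes colour moments (per-channel mean and standard deviation), a common colour descriptor in this literature; the 2x2 RGB patch is invented for illustration.

```python
import math

def color_moments(pixels):
    """First two colour moments (mean, std) per channel for a list of RGB pixels."""
    feats = []
    for c in range(3):                        # R, G, B channels
        vals = [p[c] for p in pixels]
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        feats.extend([mean, std])
    return feats

# A made-up 2x2 patch of greenish tea-leaf pixels.
patch = [(60, 120, 40), (62, 118, 42), (58, 122, 38), (60, 120, 40)]
print(color_moments(patch))  # [R mean, R std, G mean, G std, B mean, B std]
```

    Feature vectors like this, together with texture statistics, are the kind of quantitative inputs a regression or classification model for appearance quality would consume.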

  4. Vision Screening

    Science.gov (United States)

    ... an efficient and cost-effective method to identify children with visual impairment or eye conditions that are likely to lead ... main goal of vision screening is to identify children who have or are at ... visual impairment unless treated in early childhood. Other problems that ...

  5. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    Science.gov (United States)

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.

  6. Nondestructive measurement of total volatile basic nitrogen (TVB-N) in pork meat by integrating near infrared spectroscopy, computer vision and electronic nose techniques.

    Science.gov (United States)

    Huang, Lin; Zhao, Jiewen; Chen, Quansheng; Zhang, Yanhua

    2014-02-15

    Total volatile basic nitrogen (TVB-N) content is an important reference index for evaluating pork freshness. This paper attempted to measure TVB-N content in pork meat by integrating near infrared spectroscopy (NIRS), computer vision (CV), and electronic nose (E-nose) techniques. In the experiment, 90 pork samples of differing freshness were collected, and data were acquired with each of the three techniques. Then, the individual characteristic variables were extracted from each sensor. Next, principal component analysis (PCA) was used to achieve data fusion based on these characteristic variables from the 3 different sensors. A back-propagation artificial neural network (BP-ANN) was used to construct the model for TVB-N content prediction, and the top principal components (PCs) were extracted as the input of the model. The model achieved the following results: the root mean square error of prediction (RMSEP) = 2.73 mg/100g and the determination coefficient (R(p)(2)) = 0.9527 in the prediction set. Compared with any single technique, the integration of the three techniques proved superior. This work demonstrates the potential of integrating NIRS, CV, and E-nose for nondestructive detection of TVB-N content in pork meat, and shows that multi-technique data fusion can significantly improve TVB-N prediction performance. Copyright © 2013. Published by Elsevier Ltd.
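    A minimal sketch of the fusion step that precedes PCA and the BP-ANN: characteristic variables from the three instruments live on very different scales, so each feature is standardised before the per-sample vectors are concatenated. The feature values below are invented for illustration, not data from the paper.

```python
import math

def zscore_columns(rows):
    """Standardise each column to zero mean and unit variance."""
    cols = list(zip(*rows))
    out_cols = []
    for col in cols:
        mean = sum(col) / len(col)
        std = math.sqrt(sum((v - mean) ** 2 for v in col) / len(col)) or 1.0
        out_cols.append([(v - mean) / std for v in col])
    return [list(r) for r in zip(*out_cols)]

def fuse(nirs, cv, enose):
    """One fused feature vector per sample: [NIRS | CV | E-nose]."""
    return [a + b + c for a, b, c in
            zip(zscore_columns(nirs), zscore_columns(cv), zscore_columns(enose))]

nirs  = [[0.81, 0.12], [0.79, 0.15], [0.84, 0.10]]   # spectral variables
cv    = [[120.0], [131.0], [118.0]]                  # colour feature
enose = [[3.2, 0.4], [2.9, 0.6], [3.5, 0.3]]         # gas-sensor responses
fused = fuse(nirs, cv, enose)
print(len(fused), len(fused[0]))  # 3 samples x 5 fused features
```

    The fused matrix is what PCA would then compress into a few principal components for the neural network; without the per-feature standardisation, the large-magnitude colour values would dominate the components.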

  7. Interoperability Strategic Vision

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Knight, Mark R.; Melton, Ronald B.; Narang, David; Martin, Maurice; Nordman, Bruce; Khandekar, Aditya; Hardy, Keith S.

    2018-02-28

    The Interoperability Strategic Vision whitepaper aims to promote a common understanding of the meaning and characteristics of interoperability and to provide a strategy to advance the state of interoperability as applied to integration challenges facing grid modernization. This includes addressing the quality of integrating devices and systems and the discipline to improve the process of successfully integrating these components as business models and information technology improve over time. The strategic vision for interoperability described in this document applies throughout the electric energy generation, delivery, and end-use supply chain. Its scope includes interactive technologies and business processes from bulk energy levels to lower voltage level equipment and the millions of appliances that are becoming equipped with processing power and communication interfaces. A transformational aspect of a vision for interoperability in the future electric system is the coordinated operation of intelligent devices and systems at the edges of grid infrastructure. This challenge offers an example for addressing interoperability concerns throughout the electric system.

  8. Optics, illumination, and image sensing for machine vision II

    International Nuclear Information System (INIS)

    Svetkoff, D.J.

    1987-01-01

    These proceedings collect papers on the general subject of machine vision. Topics include illumination and viewing systems, x-ray imaging, automatic SMT inspection with x-ray vision, and 3-D sensing for machine vision

  9. Healthy Vision Tips

    Science.gov (United States)

    Healthy vision starts with you! Use these ...

  10. Quantum wavepacket ab initio molecular dynamics: an approach for computing dynamically averaged vibrational spectra including critical nuclear quantum effects.

    Science.gov (United States)

    Sumner, Isaiah; Iyengar, Srinivasan S

    2007-10-18

    We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method that combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond-length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and employing quantum wavepacket ab initio dynamics to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.
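    The paper's cumulative flux/velocity correlation function generalises a standard classical recipe: Fourier-transforming a velocity correlation function yields a vibrational density of states. That classical recipe, applied to a synthetic single-mode velocity trace, can be sketched as follows; the trace, time step, and frequency are illustrative assumptions, not the paper's quantum-classical construction.

```python
import math

def vdos_peak(velocities, dt):
    """Peak frequency (Hz) of the spectrum of a velocity autocorrelation function."""
    n = len(velocities)
    nlag = n // 2
    # velocity autocorrelation C(tau), averaged over time origins
    c = [sum(velocities[t] * velocities[t + lag] for t in range(n - lag)) / (n - lag)
         for lag in range(nlag)]
    best_k, best_p = 1, 0.0
    for k in range(1, nlag // 2):       # discrete Fourier transform of C(tau)
        re = sum(c[t] * math.cos(2 * math.pi * k * t / nlag) for t in range(nlag))
        im = sum(c[t] * math.sin(2 * math.pi * k * t / nlag) for t in range(nlag))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return best_k / (nlag * dt)         # DFT bin -> frequency in Hz

dt, f0 = 1e-15, 5.0e13                  # 1 fs steps, a 50 THz vibrational mode
v = [math.cos(2 * math.pi * f0 * t * dt) for t in range(400)]
peak = vdos_peak(v, dt)
print(peak / 1e12)                      # peak frequency in THz; ~50 for this trace
```

    The paper's contribution is to replace the classical velocities on one coordinate with the quantum wavepacket flux, so that the resulting spectrum inherits the nuclear quantum effects.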

  11. Revision of Electro-Mechanical Drafting Program to Include CAD/D (Computer-Aided Drafting/Design). Final Report.

    Science.gov (United States)

    Snyder, Nancy V.

    North Seattle Community College decided to integrate computer-aided design/drafting (CAD/D) into its Electro-Mechanical Drafting Program. This choice necessitated a redefinition of the program through new curriculum and course development. To initiate the project, a new industrial advisory council was formed. Major electronic and recruiting firms…

  12. Aircraft cockpit vision: Math model

    Science.gov (United States)

    Bashir, J.; Singh, R. P.

    1975-01-01

    A mathematical model was developed to describe the field of vision of a pilot seated in an aircraft. Given the position and orientation of the aircraft, along with the geometrical configuration of its windows, and the location of an object, the model determines whether the object would be within the pilot's external vision envelope provided by the aircraft's windows. The computer program using this model was implemented and is described.
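    A hedged sketch of the kind of test such a model performs: express the object's position relative to the pilot's eye point in the aircraft body frame, and check whether the line of sight falls within a window's angular aperture. The eye point and the azimuth/elevation limits below are invented; the report's actual window geometry is more detailed.

```python
import math

def visible_through_window(obj, eye, az_range, el_range):
    """True if the eye-to-object ray lies within the window's azimuth/elevation limits (degrees)."""
    dx, dy, dz = (o - e for o, e in zip(obj, eye))
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return az_range[0] <= azimuth <= az_range[1] and el_range[0] <= elevation <= el_range[1]

eye = (0.0, 0.0, 0.0)                        # pilot eye point, body frame (m)
windshield = ((-40.0, 40.0), (-10.0, 30.0))  # azimuth and elevation limits
print(visible_through_window((100.0, 20.0, 15.0), eye, *windshield))   # True
print(visible_through_window((100.0, -90.0, 5.0), eye, *windshield))   # False
```

    Repeating this check over every window of the cockpit, after rotating the object into the body frame from the aircraft's position and orientation, yields the pilot's external vision envelope.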

  13. MARR: active vision model

    Science.gov (United States)

    Podladchikova, Lubov N.; Gusakova, Valentina I.; Shaposhnikov, Dmitry G.; Faure, Alain; Golovan, Alexander V.; Shevtsova, Natalia A.

    1997-09-01

    Earlier, the biologically plausible active vision model for multiresolutional attentional representation and recognition (MARR) was developed. The model is based on the scanpath theory of Noton and Stark and provides invariant recognition of gray-level images. In the present paper, the algorithm of automatic image-viewing trajectory formation in the MARR model, the results of psychophysical experiments, and possible applications of the model are considered. The algorithm of automatic image-viewing trajectory formation is based on imitation of the scanpath formed by a human operator. Several propositions about possible mechanisms for the consecutive selection of fixation points in human visual perception, inspired by computer simulation results and known psychophysical data, have been tested and confirmed in our psychophysical experiments. In particular, we have found that gaze switch may be directed (1) to a peripheral part of the visual field which contains an edge oriented orthogonally to the edge at the point of fixation, and (2) to a peripheral part of the visual field containing crossing edges. Our experimental results have been used to optimize the automatic algorithm of image viewing in the MARR model. The modified model demonstrates the ability to recognize complex real-world images invariantly with respect to scale, shift, rotation, illumination conditions, and, in part, point of view, and can be used to solve some robot vision tasks.
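    The first gaze-switch finding suggests a simple selection rule that can be sketched as follows: among candidate peripheral edge points, prefer the one whose edge orientation is closest to orthogonal to the edge at the current fixation. The candidate points and orientations below are invented for illustration.

```python
def orthogonality(theta_a, theta_b):
    """How close two edge orientations (degrees) are to perpendicular, scaled to 0..1."""
    d = abs(theta_a - theta_b) % 180.0
    d = min(d, 180.0 - d)            # fold the difference into [0, 90]
    return d / 90.0

def next_fixation(current_orientation, candidates):
    """Pick the peripheral point with the most orthogonal edge to the current one."""
    return max(candidates, key=lambda c: orthogonality(current_orientation, c[1]))

# (point id, edge orientation in degrees) for peripheral edge points
candidates = [("A", 10.0), ("B", 85.0), ("C", 40.0)]
print(next_fixation(0.0, candidates))  # ('B', 85.0) -- nearly orthogonal to 0 degrees
```

    Iterating such a rule from fixation to fixation generates a viewing trajectory, which is the kind of scanpath imitation the MARR algorithm automates.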

  14. Early vision and focal attention

    Science.gov (United States)

    Julesz, Bela

    1991-07-01

    At the thirty-year anniversary of the introduction of the technique of computer-generated random-dot stereograms and random-dot cinematograms into psychology, the impact of the technique on brain research and on the study of artificial intelligence is reviewed. The main finding-that stereoscopic depth perception (stereopsis), motion perception, and preattentive texture discrimination are basically bottom-up processes, which occur without the help of the top-down processes of cognition and semantic memory-greatly simplifies the study of these processes of early vision and permits the linking of human perception with monkey neurophysiology. Particularly interesting are the unexpected findings that stereopsis (assumed to be local) is a global process, while texture discrimination (assumed to be a global process, governed by statistics) is local, based on some conspicuous local features (textons). It is shown that the top-down process of "shape (depth) from shading" does not affect stereopsis, and some of the models of machine vision are evaluated. The asymmetry effect of human texture discrimination is discussed, together with recent nonlinear spatial filter models and a novel extension of the texton theory that can cope with the asymmetry problem. This didactic review attempts to introduce the physicist to the field of psychobiology and its problems-including metascientific problems of brain research, problems of scientific creativity, the state of artificial intelligence research (including connectionist neural networks) aimed at modeling brain activity, and the fundamental role of focal attention in mental events.

  15. Cartesian visions.

    Science.gov (United States)

    Fara, Patricia

    2008-12-01

    Few original portraits exist of René Descartes, yet his theories of vision were central to Enlightenment thought. French philosophers combined his emphasis on sight with the English approach of insisting that ideas are not innate, but must be built up from experience. In particular, Denis Diderot criticised Descartes's views by describing how Nicholas Saunderson--a blind physics professor at Cambridge--relied on touch. Diderot also made Saunderson the mouthpiece for some heretical arguments against the existence of God.

  16. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2016-07-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular (“wet”) age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time-consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring the therapeutic success.

  17. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2017-06-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular (“wet”) age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time-consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring the therapeutic success.

  18. Tectonic vision in architecture

    DEFF Research Database (Denmark)

    Beim, Anne

    1999-01-01

    By introducing the concept of Tectonic Visions, the dissertation discusses the interrelationship between the basic idea, the form principles, the choice of building technology, and the constructive structures within a given building. Includes Mies van der Rohe, Le Corbusier, Eames, Jorn Utzon, Louis Kahn...

  20. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    Science.gov (United States)

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  1. A computational method for designing diverse linear epitopes including citrullinated peptides with desired binding affinities to intravenous immunoglobulin.

    Science.gov (United States)

    Patro, Rob; Norel, Raquel; Prill, Robert J; Saez-Rodriguez, Julio; Lorenz, Peter; Steinbeck, Felix; Ziems, Bjoern; Luštrek, Mitja; Barbarini, Nicola; Tiengo, Alessandra; Bellazzi, Riccardo; Thiesen, Hans-Jürgen; Stolovitzky, Gustavo; Kingsford, Carl

    2016-04-08

    Understanding the interactions between antibodies and the linear epitopes that they recognize is an important task in the study of immunological diseases. We present a novel computational method for the design of linear epitopes of specified binding affinity to Intravenous Immunoglobulin (IVIg). We show that the method, called Pythia-design, can accurately design peptides with both high binding affinity and low binding affinity to IVIg. To show this, we experimentally constructed and tested the computationally constructed designs. We further show experimentally that these designed peptides are more accurate than those produced by a recent method for the same task. Pythia-design is based on combining random walks with an ensemble of probabilistic support vector machine (SVM) classifiers, and we show that it produces a diverse set of designed peptides, an important property for developing robust sets of candidates for construction. We show that by combining Pythia-design and the method of (PloS ONE 6(8):23616, 2011), we are able to produce an even more accurate collection of designed peptides. Analysis of the experimental validation of Pythia-design peptides indicates that binding of IVIg is favored by epitopes that contain tryptophan and cysteine. Our method, Pythia-design, is able to generate a diverse set of binding and non-binding peptides, and its designs have been experimentally shown to be accurate.
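    The core idea of combining a random walk over sequence space with an ensemble of classifiers can be illustrated with a toy hill-climbing sketch. Everything here is hypothetical: the three hand-written scoring functions merely stand in for trained SVM classifiers, and the accept/reject rule is a simplification of Pythia-design's actual walk.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Toy stand-ins for a trained SVM ensemble: each "classifier" scores a
# peptide by counting residues it associates with binding (hypothetical
# preferences, loosely echoing the tryptophan/cysteine observation above).
ensemble = [
    lambda p: p.count("W") + p.count("C"),
    lambda p: p.count("Y") + p.count("C"),
    lambda p: p.count("W") + p.count("F"),
]

def ensemble_score(peptide):
    # Average score over the ensemble members.
    return sum(clf(peptide) for clf in ensemble) / len(ensemble)

def design_peptide(length=10, steps=200, seed=0):
    """Random-walk design: mutate one residue at a time, keeping the
    mutation whenever it does not lower the ensemble score."""
    rng = random.Random(seed)
    pep = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    best = ensemble_score(pep)
    for _ in range(steps):
        i = rng.randrange(length)
        old = pep[i]
        pep[i] = rng.choice(AMINO_ACIDS)
        new = ensemble_score(pep)
        if new >= best:
            best = new          # accept the move
        else:
            pep[i] = old        # reject the downhill move
    return "".join(pep), best

peptide, score = design_peptide()
```

    Running several walks from different seeds would give a diverse candidate set, which is the property the abstract highlights.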

  2. A method for real-time measurement of respiratory rhythms in medaka (Oryzias latipes) using computer vision for water quality monitoring.

    Science.gov (United States)

    Zheng, Hongyuan; Liu, Rong; Zhang, Rong; Hu, Yanqing

    2014-02-01

    The respiratory rhythms of Japanese medaka are considered an efficient indicator for monitoring water quality, since the fish are sensitive to chemicals and the rhythms can be measured directly from the movement of the gill tissue generated by breathing. However, few methods have been established to measure this feature intuitively in small free-swimming fish. In this article, a method is proposed to measure the influence of pollution on the respiratory rhythms of Japanese medaka with computer vision technology in real time. In order to capture images containing the complete gill tissue remotely and steadily, a special object container and an experiment platform were designed. To capture the respiratory rhythms of Japanese medaka in real time, a set of image processing algorithms is applied, including a color distribution table, Support Vector Machine (SVM), adaptive boosting (AdaBoost), and mathematical morphology. Then, to verify the effectiveness and accuracy of the whole method, fourteen groups of Japanese medaka were exposed to copper ion solutions with concentrations of 0, 0.1, 0.2, 0.3, 0.4, 0.5 and 0.6 mg/L for 48 h. Comparison with human-eye observation indicates that the data obtained through the method are generally accurate. We found that the respiratory rate of Japanese medaka showed an initial downward trend when exposed to the copper ion solution, then fluctuated repeatedly around the lower rate; before death, the respiratory rate rose slowly for a while. With increasing concentration, this trend became more pronounced, whereas no such pattern appeared in the standard dilution water. Moreover, two kinds of distinctive respiratory rhythms of poisoned medaka were discovered. This method can be widely applied to study the effects of toxic substances on the respiratory rhythms of Japanese medaka and to assess the degree of risk of the water
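    One step of such a pipeline, turning a per-frame measurement (for example, a segmented gill area) into a respiratory rate, can be sketched as a dominant-frequency estimate. This is not the authors' algorithm; the naive DFT below is only a minimal, self-contained illustration of extracting a rhythm from a frame-by-frame trace, with a synthetic signal standing in for real gill measurements.

```python
import math

def dominant_frequency(signal, fps):
    """Return the dominant nonzero frequency (Hz) of a real-valued
    signal, via a naive discrete Fourier transform."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]          # drop the DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(c * math.cos(2 * math.pi * k * t / n)
                 for t, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * t / n)
                 for t, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n

# Synthetic gill-area trace: a 2 Hz oscillation sampled at 30 frames/s.
fps, seconds = 30, 4
trace = [math.sin(2 * math.pi * 2.0 * t / fps) for t in range(fps * seconds)]
rate_hz = dominant_frequency(trace, fps)
```

    In a real system the trace would come from the segmentation stage (SVM/AdaBoost plus morphology), and a production implementation would use an FFT rather than this O(n²) loop.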

  3. Remotely Measuring Trash Fluxes in the Flood Canals of Megacities with Time Lapse Cameras and Computer Vision Algorithms - a Case Study from Jakarta, Indonesia.

    Science.gov (United States)

    Sedlar, F.; Turpin, E.; Kerkez, B.

    2014-12-01

    As megacities around the world continue to develop at breakneck speed, future development, investment, and social wellbeing are threatened by a number of environmental and social factors. Chief among these is frequent, persistent, and unpredictable urban flooding. Jakarta, Indonesia, with a population of 28 million, is a prime example of a city plagued by such flooding. Although Jakarta has ample hydraulic infrastructure already in place, with more being constructed, the increasing severity of the flooding it experiences stems not from a lack of hydraulic infrastructure but rather from the failure of existing infrastructure. As was demonstrated during the most recent floods in Jakarta, this failure is often the result of excessive amounts of trash in the flood canals, which clogs pumps and reduces the overall system capacity. Despite this critical weakness of flood control in Jakarta, no data exist on the overall amount of trash in the flood canals, much less on how it varies temporally and spatially. The recent availability of low-cost photography provides a means to obtain such data. Time-lapse photography postprocessed with computer vision algorithms yields a low-cost, remote, and automatic solution for measuring trash fluxes. When combined with the measurement of key hydrological parameters, a thorough understanding of the relationship between trash fluxes and the hydrology of massive urban areas becomes possible. This work examines algorithm development, quantification of trash parameters, and hydrological measurements, followed by data assimilation into existing hydraulic and hydrological models of Jakarta. The insights afforded by such an approach allow for more efficient operation of hydraulic infrastructure, knowledge of when and where critical levels of trash originate, and the opportunity for community outreach - which is ultimately needed to reduce the trash in the flood canals of Jakarta and megacities around the world.
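    A minimal illustration of the kind of computer-vision measurement involved is simple frame differencing: the fraction of pixels that change between time-lapse frames serves as a crude proxy for floating-debris flux. The threshold and the synthetic frames below are assumptions for illustration, not the algorithm developed in this work.

```python
def motion_fraction(prev, curr, thresh=30):
    """Fraction of pixels whose grey level changed by more than `thresh`
    between two frames, given as lists of rows of 0-255 values."""
    changed = total = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(c - p) > thresh:
                changed += 1
    return changed / total

# Two 4x4 synthetic frames: a bright "debris patch" enters the canal view.
frame_a = [[10] * 4 for _ in range(4)]
frame_b = [row[:] for row in frame_a]
frame_b[1][1] = frame_b[1][2] = 200       # 2 of 16 pixels change
flux_proxy = motion_fraction(frame_a, frame_b)
```

    Accumulating this fraction over a day of time-lapse frames gives a first-order temporal signal that could then be compared against stage and discharge measurements.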

  4. Teaching of the Microbiological Analysis of Water Using a Computer Simulation Program That Includes Digitalized Color Images.

    Science.gov (United States)

    Fernandez, A.; And Others

    1992-01-01

    Describes microcomputer-based courseware designed for the simulation of the microbiological analysis of drinking water, including digitalized color images. The use of HyperCard is described and evaluation procedures are explained, including evaluation of the learning and of the class. The evaluation questionnaires are appended. (20 references)…

  5. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available SLAM, or simultaneous localization and mapping, is a key component in the development of truly independent robots. Vision-based SLAM utilising stereo vision is a promising approach to SLAM but it is computationally expensive and difficult...

  6. Advanced topics in computer vision

    CERN Document Server

    Farinella, Giovanni Maria; Cipolla, Roberto

    2013-01-01

    This book presents a broad selection of cutting-edge research, covering both theoretical and practical aspects of reconstruction, registration, and recognition. The text provides an overview of challenging areas and descriptions of novel algorithms. Features: investigates visual features, trajectory features, and stereo matching; reviews the main challenges of semi-supervised object recognition, and a novel method for human action categorization; presents a framework for the visual localization of MAVs, and for the use of moment constraints in convex shape optimization; examines solutions to t

  7. Multistategy Learning for Computer Vision

    National Research Council Canada - National Science Library

    Bhanu, Bir

    1998-01-01

    .... With the goal of achieving robustness, our research at UCR is directed towards learning parameters, feedback, contexts, features, concepts, and strategies of IU algorithms for model-based object recognition...

  8. Computational Vision Based on Neurobiology

    Science.gov (United States)

    1994-08-10


  9. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique by which a vehicle builds a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis focuses on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the conclusion that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1-4]. A new approach using the pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, unlike the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
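    The predict/update cycle at the heart of such an EKF-based filter can be shown in scalar form. This is a didactic sketch, not the thesis code: a one-dimensional state stands in for the full camera-plus-landmarks state vector, and the noise values are arbitrary.

```python
def kf_predict(x, p, u, q):
    """Motion update: apply control/odometry input u and grow the state
    uncertainty by the process noise q."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Measurement update: blend the prediction with observation z, whose
    noise variance is r, weighted by the Kalman gain."""
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1 - k) * p

# One predict/update cycle on a scalar "landmark position" state.
x, p = 0.0, 1.0                         # prior mean and variance
x, p = kf_predict(x, p, u=1.0, q=0.1)   # vehicle moved ~1 unit
x, p = kf_update(x, p, z=1.2, r=0.1)    # landmark observed at 1.2
```

    In the full EKF, x and p become a state vector and covariance matrix, and the feature tracker (here, pyramidal Lucas-Kanade) supplies the measurements z; the extended part linearizes the nonlinear camera model at each step.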

  10. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  11. Vision as a Beachhead.

    Science.gov (United States)

    Heeger, David J; Behrmann, Marlene; Dinstein, Ilan

    2017-05-15

    When neural circuits develop abnormally due to different genetic deficits and/or environmental insults, neural computations and the behaviors that rely on them are altered. Computational theories that relate neural circuits with specific quantifiable behavioral and physiological phenomena, therefore, serve as extremely useful tools for elucidating the neuropathological mechanisms that underlie different disorders. The visual system is particularly well suited for characterizing differences in neural computations; computational theories of vision are well established, and empirical protocols for measuring the parameters of those theories are well developed. In this article, we examine how psychophysical and neuroimaging measurements from human subjects are being used to test hypotheses about abnormal neural computations in autism, with an emphasis on hypotheses regarding potential excitation/inhibition imbalances. We discuss the complexity of relating specific computational abnormalities to particular underlying mechanisms given the diversity of neural circuits that can generate the same computation, and we discuss areas of research in which computational theories need to be further developed to provide useful frameworks for interpreting existing results. A final emphasis is placed on the need to extend existing ideas into developmental frameworks that take into account the dramatic developmental changes in neurophysiology (e.g., changes in excitation/inhibition balance) that take place during the first years of life, when autism initially emerges.

  12. Machine Vision Giving Eyes to Robots. Resources in Technology.

    Science.gov (United States)

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  13. Tablet computers versus optical aids to support education and learning in children and young people with low vision: protocol for a pilot randomised controlled trial, CREATE (Children Reading with Electronic Assistance To Educate).

    Science.gov (United States)

    Crossland, Michael D; Thomas, Rachel; Unwin, Hilary; Bharani, Seelam; Gothwal, Vijaya K; Quartilho, Ana; Bunce, Catey; Dahlmann-Noor, Annegret

    2017-06-21

    Low vision and blindness adversely affect education and independence of children and young people. New 'assistive' technologies such as tablet computers can display text in enlarged font, read text out to the user, allow speech input and conversion into typed text, offer document and spreadsheet processing and give access to wide sources of information such as the internet. Research on these devices in low vision has been limited to case series. We will carry out a pilot randomised controlled trial (RCT) to assess the feasibility of a full RCT of assistive technologies for children/young people with low vision. We will recruit 40 students aged 10-18 years in India and the UK, whom we will randomise 1:1 into two parallel groups. The active intervention will be Apple iPads; the control arm will be the local standard low-vision aid care. Primary outcomes will be acceptance/usage, accessibility of the device and trial feasibility measures (time to recruit children, lost to follow-up). Exploratory outcomes will be validated measures of vision-related quality of life for children/young people as well as validated measures of reading and educational outcomes. In addition, we will carry out semistructured interviews with the participants and their teachers. NRES reference 15/NS/0068; dissemination is planned via healthcare and education sector conferences and publications, as well as via patient support organisations. NCT02798848; IRAS ID 179658, UCL reference 15/0570.

  14. Tablet computers versus optical aids to support education and learning in children and young people with low vision: protocol for a pilot randomised controlled trial, CREATE (Children Reading with Electronic Assistance To Educate)

    Science.gov (United States)

    Crossland, Michael D; Thomas, Rachel; Unwin, Hilary; Bharani, Seelam; Gothwal, Vijaya K; Quartilho, Ana; Bunce, Catey

    2017-01-01

    Introduction Low vision and blindness adversely affect education and independence of children and young people. New ‘assistive’ technologies such as tablet computers can display text in enlarged font, read text out to the user, allow speech input and conversion into typed text, offer document and spreadsheet processing and give access to wide sources of information such as the internet. Research on these devices in low vision has been limited to case series. Methods and analysis We will carry out a pilot randomised controlled trial (RCT) to assess the feasibility of a full RCT of assistive technologies for children/young people with low vision. We will recruit 40 students aged 10–18 years in India and the UK, whom we will randomise 1:1 into two parallel groups. The active intervention will be Apple iPads; the control arm will be the local standard low-vision aid care. Primary outcomes will be acceptance/usage, accessibility of the device and trial feasibility measures (time to recruit children, lost to follow-up). Exploratory outcomes will be validated measures of vision-related quality of life for children/young people as well as validated measures of reading and educational outcomes. In addition, we will carry out semistructured interviews with the participants and their teachers. Ethics and dissemination NRES reference 15/NS/0068; dissemination is planned via healthcare and education sector conferences and publications, as well as via patient support organisations. Trial registration number NCT02798848; IRAS ID 179658, UCL reference 15/0570. PMID:28637740

  15. Machine vision for real time orbital operations

    Science.gov (United States)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential to increase the efficiency of orbital servicing, repair, assembly, and docking tasks. A machine vision research project is described in which a TV camera is used to input visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. This research has produced a technique which reduces computer memory requirements and greatly increases typical computational speed, such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).
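    The memory-reduction idea behind box/scan-style image encodings can be illustrated with run-length encoding of a binary scan line. The details of AI BOSS itself are not given in the abstract, so this sketch is a generic stand-in for the compression principle rather than the actual NASA technique.

```python
def run_length_encode(row):
    """Encode a binary scan line as (value, run length) pairs, so long
    uniform stretches of an image row cost one pair instead of many pixels."""
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([px, 1])        # start a new run
    return [(v, n) for v, n in runs]

line = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
encoded = run_length_encode(line)
```

    For mostly uniform scenes (such as an object against empty space) the encoded form is far shorter than the raw scan line, which is what makes scan-based processing fast and memory-light.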

  16. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  17. Vision Screening

    Science.gov (United States)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  18. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations, and Data Operations. These groups work closely with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping up to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  19. Blindness and vision loss

    Science.gov (United States)

    ... life. Alternative names: loss of vision; no light perception (NLP); low vision; vision loss and blindness.

  20. Impairments to Vision

    Science.gov (United States)

    Normal vision, diabetic retinopathy, and age-related macular degeneration are illustrated. In the pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  1. All Vision Impairment

    Science.gov (United States)

    ... Prevalence rates for vision impairment by age and race/ethnicity; tables of 2010 U.S. age-specific prevalence rates of vision impairment by race/ethnicity.

  2. Computer ethics: its birth and its future.

    OpenAIRE

    Bynum, Terrell Ward

    2001-01-01

    This paper discusses some “historical milestones” in computer ethics, as well as two alternative visions of the future of computer ethics. Topics include the impressive foundation for computer ethics laid down by Norbert Wiener in the 1940s and early 1950s; the pioneering efforts of Donn Parker, Joseph Weizenbaum and Walter Maner in the 1970s; Krystyna Gorniak's hypothesis that computer ethics will evolve into “global ethics”; and Deborah Johnson's speculation that computer ethics may someday...

  3. AI And Early Vision - Part II

    Science.gov (United States)

    Julesz, Bela

    1989-08-01

    A quarter of a century ago I introduced two paradigms into psychology which in the intervening years have had a direct impact on the psychobiology of early vision and an indirect one on artificial intelligence (AI or machine vision). The first, the computer-generated random-dot stereogram (RDS) paradigm (Julesz, 1960), at its very inception posed a strategic question both for AI and neurophysiology. The finding that stereoscopic depth perception (stereopsis) is possible without the many enigmatic cues of monocular form recognition, as assumed previously, demonstrated that stereopsis, with its basic problem of finding matches between corresponding random aggregates of dots in the left and right visual fields, became ripe for modeling. Indeed, the binocular matching problem of stereopsis opened up an entire field of study, eventually leading to the computational models of David Marr (1982) and his coworkers. The fusion of RDS had an even greater impact on neurophysiologists, including Hubel and Wiesel (1962), who realized that stereopsis must occur at an early stage and can be studied more easily than form perception. This insight recently culminated in the studies by Gian Poggio (1984), who found binocular-disparity-tuned neurons in the input stage of the visual cortex (layer IVB in V1) of the monkey that were selectively triggered by dynamic RDS. Thus the first paradigm led to a strategic insight: with stereoscopic vision there is no camouflage, and as such it was advantageous for our primate ancestors to evolve the cortical machinery of stereoscopic vision to capture camouflaged prey (insects) at a standstill. Amazingly, although stereopsis evolved relatively late in primates, it captured the very input stages of the visual cortex. (For a detailed review, see Julesz, 1986a.)
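    The random-dot stereogram construction itself is easy to reproduce: copy a random dot field, shift a central square horizontally in one eye's image, and refill the vacated strip with fresh random dots. The sketch below is a minimal version with assumed parameters (an 8x8 field, a 4x4 square, one pixel of disparity), not Julesz's original code.

```python
import random

def random_dot_stereogram(size=8, square=4, disparity=1, seed=0):
    """Generate a left/right random-dot pair: the right image equals the
    left except that a central square is shifted `disparity` pixels,
    with the uncovered strip refilled by fresh random dots."""
    rng = random.Random(seed)
    left = [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]
    right = [row[:] for row in left]
    lo = (size - square) // 2
    for r in range(lo, lo + square):
        # Shift the square's dots leftward in the right-eye image.
        for c in range(lo, lo + square):
            right[r][c - disparity] = left[r][c]
        # Refill the strip uncovered by the shift with new random dots.
        for c in range(lo + square - disparity, lo + square):
            right[r][c] = rng.randint(0, 1)
    return left, right

left, right = random_dot_stereogram()
```

    Neither monocular image contains any visible shape; only binocular matching of the dot aggregates recovers the floating square, which is the point of the paradigm.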

  4. Vision in water.

    Science.gov (United States)

    Atchison, David A; Valentine, Emma L; Gibson, Georgina; Thomas, Hannah R; Oh, Sera; Pyo, Young Ah; Lacherez, Philippe; Mathur, Ankit

    2013-09-06

    The purpose of this study is to determine visual performance in water, including the influence of pupil size. The water environment was simulated by placing goggles filled with saline in front of the eyes, with apertures placed at the front of the goggles. Correction factors were determined for the different magnification under this condition in order to estimate vision in water. Experiments were conducted on letter visual acuity (seven participants), grating resolution (eight participants), and grating contrast sensitivity (one participant). For letter acuity, mean loss of vision in water, compared to corrected vision in air, varied from 1.1 log min of arc resolution (logMAR) for a 1 mm aperture to 2.2 logMAR for a 7 mm aperture. Vision in min of arc was described well by a linear relationship with pupil size. For grating acuity, mean loss varied from 1.1 logMAR for a 2 mm aperture to 1.2 logMAR for a 6 mm aperture. Contrast sensitivity for a 2 mm aperture deteriorated as spatial frequency increased, with a 2 log unit loss by 3 c/°. Superimposed on this deterioration were depressions (notches) in sensitivity, with the first three notches occurring at 0.45, 0.8, and 1.3 c/°, corresponding to estimates in water of 0.39, 0.70, and 1.13 c/°. In conclusion, vision in water is poor. It becomes worse as pupil size increases, but the effects are much more marked for letter targets than for grating targets.
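For illustration only: taking the abstract's two reported letter-acuity endpoints and its observation that acuity in min of arc (MAR) is linear in pupil size, one can interpolate an estimated loss at intermediate apertures. The fitted values below are a sketch derived from those two numbers, not data from the study:

```python
import math

# Endpoints reported in the abstract for letter-acuity loss in water:
# 1.1 logMAR at a 1 mm aperture, 2.2 logMAR at a 7 mm aperture.
MAR_1MM = 10 ** 1.1   # minimum angle of resolution, min of arc
MAR_7MM = 10 ** 2.2

def estimated_logmar_loss(aperture_mm):
    """Interpolate linearly in MAR (min of arc), not in logMAR,
    since the abstract reports the linear fit in min of arc."""
    slope = (MAR_7MM - MAR_1MM) / (7.0 - 1.0)
    mar = MAR_1MM + slope * (aperture_mm - 1.0)
    return math.log10(mar)

loss_4mm = estimated_logmar_loss(4.0)   # lies between the two endpoints
```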

  5. Computer simulation of evolution and interaction dynamics of the vortex structures in fluids including atmosphere and hydrosphere

    Science.gov (United States)

    Belashov, Vasily

    We study numerically the interaction of vortex structures in a continuum, specifically in fluids and plasmas, in the two-dimensional approximation where Euler-type equations are applicable, namely:

        e_i dx_i/dt = (1/B) ∂H/∂y_i,   e_i dy_i/dt = -(1/B) ∂H/∂x_i;
        ∂ρ/∂t + v·∇ρ = 0,   v = -[z, ∇ψ]/B;
        Δψ - f = -φ,

    where e_i is the strength (circulation) of a discrete vortex or the charge per unit length of a filament, φ is the z-component of the vorticity ζ or the charge density ρ, ψ is a stream function or potential for the two-dimensional flow of an inviscid fluid or a guiding-centre plasma, respectively, and H is the Hamiltonian. Note that in the continuum (fluid) model B = 1 in the Hamiltonian equations. The function f = 0 for the continuum and quasi-particle (filament) models with Coulomb interaction, and f = k²ψ for a screened Coulomb interaction model. We consider here only the case f = 0; the generalization of our approach to f = k²ψ is rather trivial. For the numerical simulation we used the contour dynamics method, modified to some extent. We performed a series of numerical simulations to study two-vortex interaction, interaction in N-vortex systems (including interaction between vortex structures and dust particles), and the interaction of two three-dimensional plane-rotating vortex structures within the framework of a many-layer model of the medium, as functions of several parameters: the initial distance between the vortices, the value and sign of their vorticities, and the spatial configuration of the vortex system. The results showed that in all cases, depending on the initial conditions, two regimes of interaction can be observed: weak interaction with quasi-stationary evolution, and active interaction with "phase intermixing", in which the evolution can lead to the formation of complex forms of vorticity regions.
The theoretical explanation of the effects we observed is given on
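In the f = 0 case, each discrete vortex moves in the velocity field induced by all the others, and the positions can be advanced with any standard integrator. A minimal point-vortex sketch (this is not the authors' contour dynamics code, which tracks vorticity-patch boundaries; B = 1 and NumPy are assumed):

```python
import numpy as np

def vortex_velocities(pos, gamma):
    """Velocity induced at each discrete vortex by all the others
    (f = 0, B = 1): v_i = sum_j gamma_j / (2*pi*r_ij^2) * (-dy, dx)."""
    vel = np.zeros_like(pos)
    for i in range(len(gamma)):
        for j in range(len(gamma)):
            if i == j:
                continue
            dx, dy = pos[i] - pos[j]
            r2 = dx * dx + dy * dy
            vel[i] += gamma[j] / (2.0 * np.pi * r2) * np.array([-dy, dx])
    return vel

def step_rk4(pos, gamma, dt):
    """Advance the vortex positions one step with classical RK4."""
    k1 = vortex_velocities(pos, gamma)
    k2 = vortex_velocities(pos + 0.5 * dt * k1, gamma)
    k3 = vortex_velocities(pos + 0.5 * dt * k2, gamma)
    k4 = vortex_velocities(pos + dt * k3, gamma)
    return pos + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Two equal vortices orbit their common centroid; the centroid and the
# pair separation are invariants of the motion, a useful sanity check.
pos = np.array([[0.5, 0.0], [-0.5, 0.0]])
gamma = np.array([1.0, 1.0])
for _ in range(100):
    pos = step_rk4(pos, gamma, 0.05)
```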

  6. Vision - Gateway to the brain

    CERN Multimedia

    1999-01-01

    Is the brain the result of (evolutionary) tinkering, or is it governed by natural law? How can we objectively know? What is the nature of consciousness? Vision research is spearheading the quest and is making rapid progress with the help of new experimental, computational and theoretical tools. At the same time it is about to lead to important technical applications.

  7. Robotic Vision for Welding

    Science.gov (United States)

    Richardson, R. W.

    1986-01-01

    Vision system for robotic welder looks at weld along axis of welding electrode. Gives robot view of most of weld area, including yet-unwelded joint, weld pool, and completed weld bead. Protected within welding-torch body, lens and fiber bundle give robot closeup view of weld in progress. Relayed to video camera on robot manipulator frame, weld image provides data for automatic control of robot motion and welding parameters.

  8. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  9. Evolutionary replacement of UV vision by violet vision in fish

    Science.gov (United States)

    Tada, Takashi; Altun, Ahmet; Yokoyama, Shozo

    2009-01-01

    The vertebrate ancestor possessed ultraviolet (UV) vision and many species have retained it during evolution. Many other species switched to violet vision and, then again, some avian species switched back to UV vision. UV and violet vision are mediated by short wavelength-sensitive (SWS1) pigments that absorb light maximally (λmax) at approximately 360 and 390–440 nm, respectively. It is not well understood why and how these functional changes have occurred. Here, we cloned the pigment of scabbardfish (Lepidopus fitchi) with a λmax of 423 nm, an example of a violet-sensitive SWS1 pigment in fish. Mutagenesis experiments and quantum mechanical/molecular mechanical (QM/MM) computations show that the violet-sensitivity was achieved by the deletion of Phe-86, which converted the unprotonated Schiff base-linked 11-cis-retinal to a protonated form. The finding of a violet-sensitive SWS1 pigment in scabbardfish suggests that many other fish also have orthologous violet pigments. The isolation and comparison of such violet and UV pigments in fish living in different ecological habitats will open an unprecedented opportunity to elucidate not only the molecular basis of phenotypic adaptations, but also the genetics of UV and violet vision. PMID:19805066

  10. Experimental simulation of simultaneous vision.

    Science.gov (United States)

    de Gracia, Pablo; Dorronsoro, Carlos; Sánchez-González, Álvaro; Sawides, Lucie; Marcos, Susana

    2013-01-17

    To present and validate a prototype of an optical instrument that allows experimental simulation of pure bifocal vision. To evaluate the influence of different power additions on image contrast and visual acuity. The instrument provides the eye with two superimposed images, aligned and with the same magnification, but with different defocus states. Subjects looking through the instrument are able to experience pure simultaneous vision, with adjustable refractive correction and addition power. The instrument is used to investigate the impact of the amount of addition of an ideal bifocal simultaneous vision correction, both on image contrast and on visual performance. The instrument is validated through computer simulations of the letter contrast and by equivalent optical experiments with an artificial eye (camera). Visual acuity (VA) was measured in four subjects (age: 34.3 ± 3.4 years; spherical error: -2.1 ± 2.7 diopters [D]) for low and high contrast letters and different amounts of addition. The largest degradation in contrast and visual acuity (∼25%) occurred for additions around ±2 D, while additions of ±4 D produced lower degradation (14%). Low additions (1-2 D) result in lower VA than high additions (3-4 D). A simultaneous vision instrument is an excellent tool to simulate bifocal vision and to gain understanding of multifocal solutions for presbyopia. Simultaneous vision induces a pattern of visual performance degradation, which is well predicted by the degradation found in image quality. Neural effects, claimed to be crucial in patients' tolerance of simultaneous vision, can therefore be compared with pure optical effects.
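Pure simultaneous vision amounts to superimposing a focused and a defocused copy of the same scene with equal weight. A toy image-domain sketch (NumPy only; a Gaussian blur stands in for the true defocus point-spread function, so the sigma parameter is illustrative rather than a calibrated addition power):

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur standing in for the defocus of the
    'addition' power (a faithful simulation would use a defocus PSF)."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)

def simultaneous_vision(img, sigma):
    """Equal-weight superposition of a focused and a defocused copy,
    as in pure bifocal simultaneous vision."""
    return 0.5 * img + 0.5 * blur(img, sigma)

# A high-frequency grating loses roughly half its contrast, mirroring
# the contrast degradation the study measures for letters and gratings.
grating = np.zeros((32, 32))
grating[:, ::2] = 1.0
seen = simultaneous_vision(grating, 2.0)
```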

  11. 2020 Vision Project Summary

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, K.W.; Scott, K.P.

    2000-11-01

    Since the 2020 Vision project began in 1996, students from participating schools have completed and submitted a variety of scenarios describing potential world and regional conditions in the year 2020 and their possible effect on US national security. This report summarizes the students' views and describes trends observed over the course of the 2020 Vision project's five years. It also highlights the main organizational features of the project. An analysis of thematic trends among the scenarios showed interesting shifts in students' thinking, particularly in their views of computer technology, US relations with China, and globalization. In 1996, most students perceived computer technology as highly beneficial to society, but as the year 2000 approached, this technology was viewed with fear and suspicion, even personified as a malicious, uncontrollable being. Yet, after New Year's passed with little disruption, students generally again perceived computer technology as beneficial. Also in 1996, students tended to see US relations with China as potentially positive, with economic interaction proving favorable to both countries. By 2000, this view had transformed into a perception of China emerging as the US's main rival and "enemy" in the global geopolitical realm. Regarding globalization, students in the first two years of the project tended to perceive world events as dependent on US action. However, by the end of the project, they saw the US as having little control over world events and therefore concluded that Americans would need to cooperate and compromise with other nations in order to maintain their own well-being.

  12. ASCI's Vision for supercomputing future

    International Nuclear Information System (INIS)

    Nowak, N.D.

    2003-01-01

    The full text of publication follows. Advanced Simulation and Computing (ASC, formerly the Accelerated Strategic Computing Initiative [ASCI]) was established in 1995 to help Defense Programs shift from test-based confidence to simulation-based confidence. Specifically, ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality - far exceeding what might have been achieved in the absence of a focused initiative. To realize its vision, ASC is creating simulation and prototyping capabilities, based on advanced weapon codes and high-performance computing

  13. Increased generalization capability of trainable COSFIRE filters with application to machine vision

    NARCIS (Netherlands)

    Azzopardi, George; Fernandez-Robles, Laura; Alegre, Enrique; Petkov, Nicolai

    2017-01-01

    The recently proposed trainable COSFIRE filters are highly effective in a wide range of computer vision applications, including object recognition, image classification, contour detection and retinal vessel segmentation. A COSFIRE filter is selective for a collection of contour parts in a certain

  14. Vision problems

    Science.gov (United States)

    ... eye, which may be a sign of retinal detachment. Night blindness. Retinal detachment: symptoms include floaters, sparks, or flashes of light ... pupils, the back of your eye (called the retina), and eye pressure. An overall medical evaluation will ...

  15. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision-based algorithms for Unmanned Aerial Vehicles (UAVs) that were developed during research and development projects. Vision information is utilized in various applications such as visual surveillance, aiming systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAVs. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision based systems are also presented.

  16. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  17. Computational aspects of algebraic curves

    CERN Document Server

    Shaska, Tanush

    2005-01-01

    The development of new computational techniques and better computing power has made it possible to attack some classical problems of algebraic geometry. The main goal of this book is to highlight such computational techniques related to algebraic curves. The area of research in algebraic curves is receiving more interest not only from the mathematics community, but also from engineers and computer scientists, because of the importance of algebraic curves in applications including cryptography, coding theory, error-correcting codes, digital imaging, computer vision, and many more.This book cove

  18. Use of computational fluid dynamics codes for safety analysis of nuclear reactor systems, including containment. Summary report of a technical meeting

    International Nuclear Information System (INIS)

    2003-11-01

    Safety analysis is an important tool for justifying the safety of nuclear power plants. Typically, this type of analysis is performed by means of system computer codes with one dimensional approximation for modelling real plant systems. However, in the nuclear area there are issues for which traditional treatment using one dimensional system codes is considered inadequate for modelling local flow and heat transfer phenomena. There is therefore increasing interest in the application of three dimensional computational fluid dynamics (CFD) codes as a supplement to or in combination with system codes. There are a number of both commercial (general purpose) CFD codes as well as special codes for nuclear safety applications available. With further progress in safety analysis techniques, the increasing use of CFD codes for nuclear applications is expected. At present, the main objective with respect to CFD codes is generally to improve confidence in the available analysis tools and to achieve a more reliable approach to safety relevant issues. An exchange of views and experience can facilitate and speed up progress in the implementation of this objective. Both the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA) believed that it would be advantageous to provide a forum for such an exchange. Therefore, within the framework of the Working Group on the Analysis and Management of Accidents of the NEA's Committee on the Safety of Nuclear Installations, the IAEA and the NEA agreed to jointly organize the Technical Meeting on the Use of Computational Fluid Dynamics Codes for Safety Analysis of Reactor Systems, including Containment. The meeting was held in Pisa, Italy, from 11 to 14 November 2002. The publication constitutes the report of the Technical Meeting. It includes short summaries of the presentations that were made and of the discussions as well as conclusions and

  19. Reading aids for adults with low vision.

    Science.gov (United States)

    Virgili, Gianni; Acosta, Ruthy; Bentley, Sharon A; Giacomelli, Giovanni; Allcock, Claire; Evans, Jennifer R

    2018-04-17

    outcomes included reading duration and acuity, ease and frequency of use, quality of life and adverse outcomes. We graded the certainty of the evidence using GRADE. We included 11 small studies with a cross-over design (435 people overall), one study with two parallel arms (37 participants) and one study with three parallel arms (243 participants). These studies took place in the USA (7 studies), the UK (5 studies) and Canada (1 study). Age-related macular degeneration (AMD) was the most frequent cause of low vision, with 10 studies reporting 50% or more participants with the condition. Participants were aged 9 to 97 years in these studies, but most were older (the median average age across studies was 71 years). None of the studies were masked; otherwise we largely judged the studies to be at low risk of bias. All studies reported the primary outcome: results for reading speed. None of the studies measured or reported adverse outcomes.Reading speed may be higher with stand-mounted closed circuit television (CCTV) than with optical devices (stand or hand magnifiers) (low-certainty evidence, 2 studies, 92 participants). There was moderate-certainty evidence that reading duration was longer with the electronic devices and that they were easier to use. Similar results were seen for electronic devices with the camera mounted in a 'mouse'. Mixed results were seen for head-mounted devices with one study of 70 participants finding a mouse-based head-mounted device to be better than an optical device and another study of 20 participants finding optical devices better (low-certainty evidence). Low-certainty evidence from three studies (93 participants) suggested no important differences in reading speed, acuity or ease of use between stand-mounted and head-mounted electronic devices. 
Similarly, low-certainty evidence from one study of 100 participants suggested no important differences between a 9.7'' tablet computer and stand-mounted CCTV in reading speed, with imprecise estimates

  20. Cataract Vision Simulator

    Science.gov (United States)

    ... and Videos: What Do Cataracts Look Like? Cataract Vision Simulator Leer en Español: Simulador: Catarata Jun. 11, 2014 How do cataracts affect your vision? A cataract is a clouding of the eye's ...