WorldWideScience

Sample records for computer vision technique

  1. Measurement Error with Different Computer Vision Techniques

    Science.gov (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

The goal of this work is to offer a comparison of measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques (average errors, standard deviations, and uncertainties), obtaining a guide for identifying the tolerances each technique can achieve and choosing the best one.
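
As a concrete illustration of the kind of comparison described above, the sketch below computes the average error, its standard deviation, and a simple expanded uncertainty from repeated measurements of a known gauge block. The measurement values and the coverage factor are illustrative assumptions, not numbers from the paper.

```python
# Illustrative only: error statistics for one 3D-reconstruction technique against a
# known gauge block. Values and the coverage factor k are assumptions, not paper data.
import numpy as np

def error_statistics(measured_mm, nominal_mm, k=2.0):
    """Return mean error, sample standard deviation and expanded uncertainty k*s/sqrt(n)."""
    errors = np.asarray(measured_mm, dtype=float) - nominal_mm   # signed deviations
    mean_error = errors.mean()
    std_dev = errors.std(ddof=1)
    expanded_u = k * std_dev / np.sqrt(errors.size)              # type-A estimate, factor k
    return mean_error, std_dev, expanded_u

# Ten hypothetical measurements of a 50.000 mm gauge block
measurements = [50.012, 49.987, 50.021, 49.995, 50.008,
                50.015, 49.990, 50.003, 50.010, 49.998]
print(error_statistics(measurements, 50.000))
```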

  2. MEASUREMENT ERROR WITH DIFFERENT COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández

    2017-09-01

Full Text Available The goal of this work is to offer a comparison of measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques (average errors, standard deviations, and uncertainties), obtaining a guide for identifying the tolerances each technique can achieve and choosing the best one.

  3. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

This Special Edited Volume is a unique approach towards Computational solutions for the upcoming field of study called Vision Science. From a scientific standpoint, Optics, Ophthalmology, and Optical Science have traversed an odyssey of optimizing configurations of optical systems, surveillance cameras and other nano-optical devices under the metaphor of Nano Science and Technology. Still, these systems fall short on the computational side of achieving the pinnacle of the human vision system. In this edited volume much attention has been given to addressing the coupling issues between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision, etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...

  4. Jet-Images: Computer Vision Inspired Techniques for Jet Tagging

    CERN Document Server

    Cogan, Josh; Strauss, Emanuel; Schwarztman, Ariel

    2014-01-01

We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images, derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
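
For readers unfamiliar with the Fisher-discriminant step, here is a hedged sketch of the general idea on synthetic data: each jet is treated as a small pixel grid of calorimeter-tower intensities, the grids are flattened, and a Fisher linear discriminant separates "signal" from "background" jets. The grid size, the preprocessing, and the fake hot region used to mimic W jets are assumptions for illustration only, not the paper's configuration.

```python
# Synthetic stand-in for the jet-image idea: flatten per-jet "calorimeter" grids and
# train a Fisher linear discriminant. Grid size, preprocessing and the fake hot region
# are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_per_class, side = 500, 25                      # 25x25 tower grid per jet (assumed)
signal = rng.exponential(1.0, (n_per_class, side, side))
signal[:, 10:15, 10:15] += 3.0                   # crude stand-in for a boosted-W hot spot
background = rng.exponential(1.0, (n_per_class, side, side))

X = np.vstack([signal, background]).reshape(2 * n_per_class, -1)
X /= X.sum(axis=1, keepdims=True)                # simple preprocessing: unit total intensity
y = np.r_[np.ones(n_per_class), np.zeros(n_per_class)]

fisher = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", fisher.score(X, y))
# fisher.coef_.reshape(side, side) can be visualised as the discriminating image.
```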

  5. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  6. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  7. Computer vision techniques for the diagnosis of skin cancer

    CERN Document Server

    Celebi, M

    2014-01-01

    The goal of this volume is to summarize the state-of-the-art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy, have recently come to the forefront in providing greater diagnostic accuracy. These imaging technologies presented in this book can serve as an adjunct to physicians and  provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for pa...

  8. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
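
As a minimal, hedged illustration of the core operation the book covers (not code from the book or its website), the snippet below matches a grey-level template against an image with normalised cross-correlation in OpenCV; the image and template are synthetic placeholders.

```python
# Normalised cross-correlation template matching with OpenCV; synthetic data only.
import cv2
import numpy as np

image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
template = image[100:140, 150:190].copy()            # cut a patch so a perfect match exists

scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)        # best match position as (x, y)
print("peak NCC score:", round(float(max_val), 3), "at", max_loc)   # ~1.0 at (150, 100)
```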

  9. Computer vision techniques for rotorcraft low-altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

A description is given of research that applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  10. Computer vision techniques for rotorcraft low-altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

A description is given of research that applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  11. Computer vision techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar

    1990-01-01

Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles, presents challenging problems. Research is described which applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.

  12. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  13. Localization System for a Mobile Robot Using Computer Vision Techniques

    Directory of Open Access Journals (Sweden)

    Rony Cruz Ramírez

    2012-05-01

Full Text Available Mobile Robotics is a subject with multiple fields of action, hence studies in this area are of vital importance. This paper describes the development of a localization system for a mobile robot using Computer Vision. A webcam is placed at a height from which the navigation environment can be seen. A LEGO NXT kit is used to build a wheeled mobile robot with a differential drive configuration. The software is programmed in C++ using the OpenCV 2.0 function library. This software handles the webcam, processes the captured images, calculates the location, and controls and communicates with the robot via Bluetooth. It also implements a kinematic position control, and several experiments were performed to verify the reliability of the localization system. The results of one such experiment are described here.
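
A hedged sketch of one way such a vision-based localization step can be implemented with OpenCV is shown below: an overhead frame is thresholded in HSV for a coloured marker on the robot and the blob centroid gives the image-plane position. The marker colour, thresholds and synthetic frame are assumptions for illustration; the paper's own implementation details are not reproduced here.

```python
# Assumed colour-marker localization: HSV threshold + blob centroid in an overhead frame.
import cv2
import numpy as np

# Synthetic overhead frame with a blue marker; in practice this comes from the webcam.
frame = np.zeros((240, 320, 3), np.uint8)
cv2.circle(frame, (200, 120), 10, (255, 0, 0), -1)        # BGR blue disc = the marker

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (100, 120, 70), (130, 255, 255))  # assumed blue range
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

m = cv2.moments(mask)
if m["m00"] > 0:
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]     # marker centroid in pixels
    print("robot image position:", (round(cx, 1), round(cy, 1)))
    # A calibrated homography would map this pixel position to floor coordinates.
```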

  14. The use of computer vision techniques to augment home based sensorised environments.

    Science.gov (United States)

    Uhríková, Zdenka; Nugent, Chris D; Hlavác, Václav

    2008-01-01

Technology within the home environment is becoming widely accepted as a means to facilitate independent living. Nevertheless, practical issues of detecting different tasks between multiple persons within the same environment, along with managing instances of uncertainty associated with recorded sensor data, are two key challenges yet to be fully solved. This work presents details of how computer vision techniques can be used as both an alternative and a complementary means in the assessment of behaviour in home based sensorised environments. Within our work we assessed the ability of vision processing techniques, in conjunction with sensor based data, to deal with instances of multiple occupancy. Our results indicate that the inclusion of the video data improved the overall process of task identification by detecting and recognizing multiple people in the environment using a color-based tracking algorithm.

  15. An overview of computer vision

    Science.gov (United States)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  16. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  17. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  18. Learning in Computer Vision and Image Understanding

    OpenAIRE

    Greenspan, Hayit

    1994-01-01

    There is an increasing interest in the area of Learning in Computer Vision and Image Understanding, both from researchers in the learning community and from researchers involved with the computer vision world. The field is characterized by a shift away from the classical, purely model-based, computer vision techniques, towards data-driven learning paradigms for solving real-world vision problems.

  19. Qualitative classification of milled rice grains using computer vision and metaheuristic techniques.

    Science.gov (United States)

    Zareiforoush, Hemad; Minaei, Saeid; Alizadeh, Mohammad Reza; Banakar, Ahmad

    2016-01-01

Qualitative grading of milled rice grains was carried out in this study using a machine vision system combined with some metaheuristic classification approaches. Images of four different classes of milled rice, including Low-processed sound grains (LPS), Low-processed broken grains (LPB), High-processed sound grains (HPS), and High-processed broken grains (HPB), representing quality grades of the product, were acquired using a computer vision system. Four different metaheuristic classification techniques, including artificial neural networks, support vector machines, decision trees and Bayesian networks, were utilized to classify milled rice samples. Results of the validation process indicated that the artificial neural network with 12-5*4 topology had the highest classification accuracy (98.72 %). Next, the support vector machine with Universal Pearson VII kernel function (98.48 %), the decision tree with REP algorithm (97.50 %), and the Bayesian network with Hill Climber search algorithm (96.89 %) achieved the next highest accuracies, respectively. Results presented in this paper can be utilized for developing an efficient system for fully automated classification and sorting of milled rice grains.
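
To make the comparison concrete, the sketch below cross-validates four classifier families of the same kinds named above on a placeholder feature table. The exact variants used in the paper (Pearson VII kernel, REP tree, hill-climbing Bayesian network) are replaced by readily available scikit-learn substitutes, and the data are random, so the printed accuracies are meaningless except as a template.

```python
# Placeholder comparison of four classifier families on random "grain feature" data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB       # stand-in for a Bayesian-network classifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 12))                   # 12 features per grain (placeholder)
y = rng.integers(0, 4, size=400)                 # four classes: LPS, LPB, HPS, HPB

models = {
    "ANN (5 hidden units)": MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%} cross-validated accuracy")
```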

  20. A Model of an Expert Computer Vision and Recognition Facility with Applications of a Proportion Technique.

    Science.gov (United States)

    2014-09-26

The record text is fragmentary; it mentions an expert facial-recognition function called WHATISFACE [Rhodes][Tucker][Hogg][Sowa] and cites model-based vision work such as: Hogg, D., "Model-based vision: a program to see a walking person", Image and Vision Computing, Vol. 1, No. 1, February 1983, pp. 5-20.

  1. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro

    2012-05-01

Full Text Available In this paper we present details about the implementation of computer vision based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, in which the computer-patient interaction during play contributes to the development of different motor skills. The use of computer vision methods allows automatic guidance of the patient's movements, making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices such as ordinary webcams and netbooks.

  2. Computer Vision Syndrome.

    Science.gov (United States)

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  3. Computer vision and color measurement techniques for inline monitoring of cheese curd syneresis.

    Science.gov (United States)

    Everard, C D; O'Callaghan, D J; Fagan, C C; O'Donnell, C P; Castillo, M; Payne, F A

    2007-07-01

    Optical characteristics of stirred curd were simultaneously monitored during syneresis in a 10-L cheese vat using computer vision and colorimetric measurements. Curd syneresis kinetic conditions were varied using 2 levels of milk pH (6.0 and 6.5) and 2 agitation speeds (12.1 and 27.2 rpm). Measured optical parameters were compared with gravimetric measurements of syneresis, taken simultaneously. The results showed that computer vision and colorimeter measurements have potential for monitoring syneresis. The 2 different phases, curd and whey, were distinguished by means of color differences. As syneresis progressed, the backscattered light became increasingly yellow in hue for circa 20 min for the higher stirring speed and circa 30 min for the lower stirring speed. Syneresis-related gravimetric measurements of importance to cheese making (e.g., curd moisture content, total solids in whey, and yield of whey) correlated significantly with computer vision and colorimetric measurements.
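
A hedged sketch of the colour-monitoring idea follows: compute the mean hue of each frame of the vat over time and correlate it with gravimetric syneresis measurements taken at the same instants. The frames and whey-yield values below are synthetic placeholders, not data from the study.

```python
# Placeholder hue-vs-syneresis correlation; frames and whey yields are synthetic.
import cv2
import numpy as np

def mean_hue(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return float(hsv[:, :, 0].mean())                     # OpenCV hue runs 0-179

# Fake vat frames whose colour drifts over time; real frames would come from the camera.
frames = [np.full((100, 100, 3), (40, 180 + 2 * t, 200), np.uint8) for t in range(10)]
hues = np.array([mean_hue(f) for f in frames])

whey_yield = np.linspace(5.0, 30.0, 10)                   # placeholder gravimetric data (%)
r = np.corrcoef(hues, whey_yield)[0, 1]
print("correlation between mean hue and whey yield:", round(float(r), 3))
```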

  4. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Science.gov (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues is examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
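
As background for the stereo-matching step mentioned above, the snippet below implements a plain 3x3 census transform (a generic textbook version, not the paper's modified variant); matching costs between left and right images would then be Hamming distances between these codes.

```python
# Generic 3x3 census transform; the paper uses a modified variant not reproduced here.
import numpy as np

def census_3x3(img):
    """8-bit census code per interior pixel: bit i is 1 if neighbour i < centre pixel."""
    img = np.asarray(img, dtype=np.int32)
    centre = img[1:-1, 1:-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            code |= (neighbour < centre).astype(np.uint8) << bit
            bit += 1
    return code

# Stereo matching would compare codes of left/right pixels via Hamming distance.
left = np.random.randint(0, 256, (8, 8))
print(census_3x3(left))
```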

  5. CORROSION DETECTION USING A.I. : A COMPARISON OF STANDARD COMPUTER VISION TECHNIQUES AND DEEP LEARNING MODEL

    Directory of Open Access Journals (Sweden)

    Luca Petricca

    2016-05-01

Full Text Available In this paper we present a comparison between standard computer vision techniques and a Deep Learning approach for automatic metal corrosion (rust) detection. For the classic approach, a classification based on the number of pixels containing specific red components has been utilized. The code, written in Python, used OpenCV libraries to compute and categorize the images. For the Deep Learning approach, we chose Caffe, a powerful framework developed at the "Berkeley Vision and Learning Center" (BVLC). The test has been performed by classifying images and calculating the total accuracy for the two different approaches.
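
A hedged sketch of that classic pixel-counting idea is given below; the HSV thresholds and the 5% decision cut-off are illustrative assumptions rather than values taken from the paper.

```python
# Assumed rust-pixel fraction classifier; thresholds and cut-off are illustrative.
import cv2
import numpy as np

def rust_fraction(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # two hue bands because red wraps around 0 on OpenCV's 0-179 hue scale
    mask_low = cv2.inRange(hsv, (0, 60, 40), (12, 255, 200))
    mask_high = cv2.inRange(hsv, (168, 60, 40), (179, 255, 200))
    mask = cv2.bitwise_or(mask_low, mask_high)
    return mask.mean() / 255.0                    # fraction of rust-coloured pixels

image = np.zeros((100, 100, 3), np.uint8)
image[30:70, 30:70] = (30, 60, 150)               # BGR: a dull reddish-brown patch
print("corroded" if rust_fraction(image) > 0.05 else "clean")
```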

  6. Study on a New Technique of On-line Monitoring of Oil Contamination Level Using Computer Vision Technology

    Institute of Scientific and Technical Information of China (English)

    TU Qun-zhang; ZUO Hong-fu

    2004-01-01

In this paper, a new technique for capturing images of debris in lubrication or hydraulic oil using micro-imaging and computer vision techniques is introduced. By way of image processing, the size and distribution of the debris are obtained, and then the oil contamination level is obtained as well. Because the information on oil contamination is obtained directly from the images of the debris, the monitoring result is more intuitive and reliable.
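
The sketch below illustrates, under assumptions, the kind of processing described: segment dark debris particles in a back-lit micro-image, measure their areas with connected components, and use the resulting size distribution to grade contamination. The synthetic image and threshold are placeholders.

```python
# Illustrative debris sizing: threshold a back-lit micro-image and measure particle areas.
import cv2
import numpy as np

micrograph = np.full((200, 200), 220, np.uint8)           # bright background stand-in
cv2.circle(micrograph, (60, 60), 4, 40, -1)               # two synthetic debris particles
cv2.circle(micrograph, (140, 90), 9, 40, -1)

_, binary = cv2.threshold(micrograph, 128, 255, cv2.THRESH_BINARY_INV)
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

areas_px = stats[1:, cv2.CC_STAT_AREA]                    # skip label 0 (the background)
print("particle areas (px):", sorted(int(a) for a in areas_px))
# With a known pixel-to-micrometre scale, these areas map to the size bins used to
# grade the contamination level.
```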

  7. Embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2008-01-01

    Brings together experiences from researchers in the field of embedded computer vision, from both academic and industrial research centers, and covers a broad range of challenges and trade-offs brought about by this paradigm shift. This title offers emphasis on tackling important problems for society, safety, security, health, and mobility.

  8. Computer Vision Systems

    Science.gov (United States)

    Gunasekaran, Sundaram

Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definition, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in the academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  9. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours).
· Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics
· Emphasis on algorithmic advances that will allow re-application in other...

  10. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.   ·         Provides i...

  11. Visual Behaviour Based Bio-Inspired Polarization Techniques in Computer Vision and Robotics

    OpenAIRE

    Shabayek, Abd El Rahman; Morel, Olivier; Fofi, David

    2012-01-01

    For long time, it was thought that the sensing of polarization by animals is invariably related to their behavior, such as navigation and orientation. Recently, it was found that polarization can be part of a high-level visual perception, permitting a wide area of vision applications. Polarization vision can be used for most tasks of color vision including object recognition, contrast enhancement, camouflage breaking, and signal detection and discrimination. The polarization based visual beha...

  12. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

Full Text Available Computer vision provides methods for extracting information from the raw sensor data. In Withey's talk, methods suitable for computer vision in autonomous, mobile robots are described and results from the application of these vision techniques are provided, specifically in a robot system...

  13. Machine vision is not computer vision

    Science.gov (United States)

    Batchelor, Bruce G.; Charlier, Jean-Ray

    1998-10-01

The identity of Machine Vision as an academic and practical subject of study is asserted. In particular, the distinction between Machine Vision on the one hand and Computer Vision, Digital Image Processing, Pattern Recognition and Artificial Intelligence on the other is emphasized. The article demonstrates through four case studies that the active involvement of a person who is sensitive to the broad aspects of vision system design can avoid disaster and can often achieve a successful machine that would not otherwise have been possible. This article is a transcript of the keynote address presented at the conference. Since the proceedings are prepared and printed before the conference, it is not possible to include a record of the response to this paper made by the delegates during the round-table discussion. It is hoped to collate and disseminate these via the World Wide Web after the event. (A link will be provided at http://bruce.cs.cf.ac.uk/bruce/index.html.).

  14. Fractographic classification in metallic materials by using 3D processing and computer vision techniques

    Directory of Open Access Journals (Sweden)

    Maria Ximena Bastidas-Rodríguez

    2016-09-01

Full Text Available Failure analysis aims at collecting information about how and why a failure is produced. The first step in this process is a visual inspection of the flaw surface, which reveals the features, marks, and texture that characterize each type of fracture. This is generally carried out by personnel with no experience who usually lack the knowledge to do it. This paper proposes a classification method for three kinds of fractures in crystalline materials: brittle, fatigue, and ductile. The method uses 3D vision, and it is expected to support failure analysis. The features used in this work were: (i) Haralick's features and (ii) the fractal dimension. These features were applied to 3D images obtained from a Zeiss LSM 700 confocal laser scanning microscope. For the classification, we evaluated two classifiers: Artificial Neural Networks and Support Vector Machine. The performance evaluation was made by extracting four marginal relations from the confusion matrix (accuracy, sensitivity, specificity, and precision) plus three evaluation methods: Receiver Operating Characteristic space, the Individual Classification Success Index, and the Jaccard coefficient. Although the classification percentage obtained by an expert is better than the one obtained with the algorithm, the algorithm achieves a classification percentage near or exceeding 60 % accuracy for the analyzed failure modes. The results presented here provide a good approach to address future research on texture analysis using 3D data.
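
A hedged reading of the texture-feature stage is sketched below: grey-level co-occurrence (Haralick-type) features are extracted from each surface image and fed to a support vector machine, evaluated by cross-validation. The images, labels, GLCM settings and the restriction to two classes are assumptions for illustration.

```python
# Assumed pipeline: GLCM (Haralick-type) texture features + SVM, on synthetic patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def texture_features(img):
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-ins: "smooth" vs "rough" 64x64 patches for two fracture classes.
smooth = [rng.integers(100, 130, (64, 64), dtype=np.uint8) for _ in range(30)]
rough = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(30)]
X = np.array([texture_features(im) for im in smooth + rough])
y = np.array([0] * 30 + [1] * 30)

print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```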

  15. Computer vision syndrome: a review.

    Science.gov (United States)

    Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W

    2005-01-01

As computers become part of our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and treatment modalities that are available at this time. Computer vision syndrome symptoms may result from ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of the computer monitor and regular work breaks may help improve visual comfort. Lubricating eye drops and special computer glasses help relieve ocular surface-related symptoms. More work needs to be done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes.

  16. Artificial intelligence and computer vision

    CERN Document Server

    Li, Yujie

    2017-01-01

    This edited book presents essential findings in the research fields of artificial intelligence and computer vision, with a primary focus on new research ideas and results for mathematical problems involved in computer vision systems. The book provides an international forum for researchers to summarize the most recent developments and ideas in the field, with a special emphasis on the technical and observational results obtained in the past few years.

  17. Application of Assistive Computer Vision Methods to Oyama Karate Techniques Recognition

    Directory of Open Access Journals (Sweden)

    Tomasz Hachaj

    2015-09-01

Full Text Available In this paper we propose a novel algorithm that enables online action segmentation and classification. The algorithm segments, from an incoming motion capture (MoCap) data stream, sport (or karate) movement sequences that are later processed by the classification algorithm. The segmentation is based on a Gesture Description Language classifier that is trained with an unsupervised learning algorithm. The classification is performed by a continuous-density, forward-only hidden Markov model (HMM) classifier. Our methodology was evaluated on a unique dataset consisting of MoCap recordings of six Oyama karate martial artists, including a multiple champion of Kumite Knockdown Oyama karate. The dataset consists of 10 classes of actions and includes dynamic actions of stances, kicks and blocking techniques. The total number of samples was 1236. We examined several HMM classifiers with various numbers of hidden states, and also a Gaussian mixture model (GMM) classifier, to empirically find the best setup of the proposed method on our dataset. We used leave-one-out cross validation. The recognition rate of our methodology differs between karate techniques and ranges from 81% ± 15% up to 100%. Our method is not limited to this class of actions but can easily be adapted to any other MoCap-based actions. The description of our approach and its evaluation are the main contributions of this paper. The results presented in this paper are effects of pioneering research on online karate action classification.
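
The classification stage can be illustrated, under assumptions, as follows: one Gaussian HMM is trained per action class on MoCap feature sequences, and a new sequence is assigned to the class whose model gives the highest log-likelihood. The synthetic sequences, feature dimensionality and number of hidden states below are placeholders, not the paper's setup.

```python
# Assumed classification stage: one Gaussian HMM per action class, pick the best score.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=20, length=40, dim=6):
    """Fake MoCap feature sequences (e.g. joint angles) for one action class."""
    return [offset + rng.normal(size=(length, dim)) for _ in range(n_seq)]

classes = {"kick": make_sequences(0.0), "block": make_sequences(2.0)}

models = {}
for name, seqs in classes.items():
    X = np.vstack(seqs)                       # hmmlearn expects concatenated sequences
    lengths = [len(s) for s in seqs]          # ...plus the length of each sequence
    models[name] = GaussianHMM(n_components=4, covariance_type="diag",
                               n_iter=20).fit(X, lengths)

test = classes["block"][0]
best = max(models, key=lambda name: models[name].score(test))
print("predicted action:", best)              # expected: "block"
```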

  18. Machine Learning for Computer Vision

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2013-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year. A summary of the past Computer Vision Summer Schools can be found at: http://www.dmi.unict.it/icvss This edited volume contains a selection of articles covering some of the talks and t...

  19. Inter- and intraspecific diversity in Cistus L. (Cistaceae) seeds, analysed with computer vision techniques.

    Science.gov (United States)

    Lo Bianco, M; Grillo, O; Cañadas, E; Venora, G; Bacchetta, G

    2017-03-01

This work aims to discriminate among different species of the genus Cistus, using seed parameters and following the scientific plant names included as accepted in The Plant List. Also, the intraspecific phenotypic differentiation of C. creticus, through comparison of three subspecies (C. creticus subsp. creticus, C. c. subsp. eriocephalus and C. c. subsp. corsicus), as well as the interpopulation variability among five C. creticus subsp. eriocephalus populations, was evaluated. Seed mean weight and 137 morphocolorimetric quantitative variables, describing shape, size, colour and textural seed traits, were measured using image analysis techniques. Measured data were analysed applying step-wise linear discriminant analysis. An overall cross-validated classification performance of 80.6% was recorded at the species level. With regard to C. creticus, as a case study, percentages of correct discrimination of 96.7% and 99.6% were achieved at the intraspecific and interpopulation levels, respectively. In this classification model, the relevance of the colorimetric and textural descriptive features was highlighted, as well as that of the seed mean weight, which was the most discriminant feature at the specific and intraspecific levels. These achievements prove that the image analysis system is highly diagnostic for systematic purposes and confirm that seeds in the genus Cistus have important diagnostic value. © 2016 German Botanical Society and The Royal Botanical Society of the Netherlands.
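
A hedged sketch of a stepwise-LDA-style analysis is given below: forward feature selection wrapped around a linear discriminant classifier, scored by cross-validation. The placeholder feature table stands in for the 137 morphocolorimetric variables, and scikit-learn's sequential selector is only an approximation of classical stepwise LDA.

```python
# Approximate stepwise LDA: forward feature selection around a linear discriminant.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                # placeholder morpho-colorimetric features
y = rng.integers(0, 3, size=300)              # placeholder species labels

lda = LinearDiscriminantAnalysis()
selector = SequentialFeatureSelector(lda, n_features_to_select=5,
                                     direction="forward", cv=3)
model = make_pipeline(selector, LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```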

  20. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  1. Computer Vision and Mathematical Morphology

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Kropratsch, W.; Klette, R.; Albrecht, R.

    1996-01-01

    Mathematical morphology is a theory of set mappings, modeling binary image transformations, which are invariant under the group of Euclidean translations. This framework turns out to be too restricted for many applications, in particular for computer vision where group theoretical considerations suc

  2. Computer Vision and Mathematical Morphology

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Kropratsch, W.; Klette, R.; Albrecht, R.

    1996-01-01

    Mathematical morphology is a theory of set mappings, modeling binary image transformations, which are invariant under the group of Euclidean translations. This framework turns out to be too restricted for many applications, in particular for computer vision where group theoretical considerations

  3. The Computational Study of Vision.

    Science.gov (United States)

    1988-04-01

Local motion measurements provide only partial information about the 2-D velocity field, due to the aperture problem (Wallach, 1976; Fennema and Thompson, 1979; Burt and...). This issue arises both in computer vision studies and in biological models of motion measurement (for example, Lappin and Bell, 1976; Pantle and Picciano, 1976). Cited work includes: Fennema, C. L., Thompson, W. B. 1979. Velocity determination in scenes containing several moving objects. Comput. Graph. Image Proc. 9:301-315.

  4. Detecting Faults in Southern California using Computer-Vision Techniques and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Interferometry

    Science.gov (United States)

    Barba, M.; Rains, C.; von Dassow, W.; Parker, J. W.; Glasscoe, M. T.

    2013-12-01

    Knowing the location and behavior of active faults is essential for earthquake hazard assessment and disaster response. In Interferometric Synthetic Aperture Radar (InSAR) images, faults are revealed as linear discontinuities. Currently, interferograms are manually inspected to locate faults. During the summer of 2013, the NASA-JPL DEVELOP California Disasters team contributed to the development of a method to expedite fault detection in California using remote-sensing technology. The team utilized InSAR images created from polarimetric L-band data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) project. A computer-vision technique known as 'edge-detection' was used to automate the fault-identification process. We tested and refined an edge-detection algorithm under development through NASA's Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) project. To optimize the algorithm we used both UAVSAR interferograms and synthetic interferograms generated through Disloc, a web-based modeling program available through NASA's QuakeSim project. The edge-detection algorithm detected seismic, aseismic, and co-seismic slip along faults that were identified and compared with databases of known fault systems. Our optimization process was the first step toward integration of the edge-detection code into E-DECIDER to provide decision support for earthquake preparation and disaster management. E-DECIDER partners that will use the edge-detection code include the California Earthquake Clearinghouse and the US Department of Homeland Security through delivery of products using the Unified Incident Command and Decision Support (UICDS) service. Through these partnerships, researchers, earthquake disaster response teams, and policy-makers will be able to use this new methodology to examine the details of ground and fault motions for moderate to large earthquakes. Following an earthquake, the newly discovered faults can
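
As a hedged illustration of the edge-detection idea (not the E-DECIDER code), the sketch below builds a synthetic line-of-sight deformation map containing a fault-like step and locates the linear discontinuity with a Canny detector; all parameter values are invented for the example.

```python
# Synthetic deformation map with a fault-like step, located with Canny edge detection.
import cv2
import numpy as np

_, x = np.mgrid[0:256, 0:256].astype(np.float32)
los = 0.0001 * x + 0.02 * (x > 128)             # smooth ramp (m) + 2 cm fault-like offset

img = cv2.normalize(los, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
edges = cv2.Canny(img, 50, 150)                 # the linear discontinuity becomes an edge
cols = np.where(edges.any(axis=0))[0]
print("detected discontinuity near column:", int(np.median(cols)))   # expected ~128
```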

  5. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  6. Computer vision in microstructural analysis

    Science.gov (United States)

    Srinivasan, Malur N.; Massarweh, W.; Hough, C. L.

    1992-01-01

The following is a laboratory experiment designed to be performed by advanced high-school and beginning college students. It is hoped that this experiment will create an interest in and further understanding of materials science. The objective of this experiment is to demonstrate that the microstructure of engineered materials is affected by the processing conditions in manufacture, and that it is possible to characterize the microstructure using image analysis with a computer. The principle of computer vision will first be introduced, followed by a description of the system developed at Texas A&M University. This in turn will be followed by a description of the experiment to obtain differences in microstructure and the characterization of the microstructure using computer vision.

  7. COMPUTER VISION SYNDROME: A SHORT REVIEW

    National Research Council Canada - National Science Library

    Sameena; Mohd Inayatullah

    2012-01-01

.... The increased usage of computers has led to a variety of ocular symptoms which include eye strain, tired eyes, irritation, redness, blurred vision, and diplopia, collectively referred to as Computer Vision Syndrome (CVS...

  8. Understanding and Preventing Computer Vision Syndrome

    OpenAIRE

    REDDY SC; LOH KY

    2008-01-01

    The invention of computer and advancement in information technology has revolutionized and benefited the society but at the same time has caused symptoms related to its usage such as ocular sprain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome which is characterized by the visual symptoms which result from interaction with computer display or its environment. Three major mechanisms that lead to computer vision syn...

  9. Parallel Algorithms for Computer Vision.

    Science.gov (United States)

    1989-01-01

Developed algorithms for several early vision processes, such as edge detection and stereo... The system operates by receiving a stream of instructions from its front-end computer: instructions flow into the Connection Machine hardware from the front end, and these macro-instructions are sent to a microcontroller, which expands them

  10. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The Contributions include:
· Morphological Image Analysis for Computer Vision Applications.
· Methods for Detecting of Structural Changes in Computer Vision Systems.
· Hierarchical Adaptive KL-based Transform: Algorithms and Applications.
· Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores.
· A Way of Energy Analysis for Image and Video Sequence Processing.
· Optimal Measurement of Visual Motion Across Spatial and Temporal Scales.
· Scene Analysis Using Morphological Mathematics and Fuzzy Logic.
· Digital Video Stabilization in Static and Dynamic Scenes.
· Implementation of Hadamard Matrices for Image Processing.
· A Generalized Criterion ...

  11. Understanding and preventing computer vision syndrome.

    Science.gov (United States)

Loh, KY; Reddy, SC

    2008-01-01

    The invention of computer and advancement in information technology has revolutionized and benefited the society but at the same time has caused symptoms related to its usage such as ocular sprain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome which is characterized by the visual symptoms which result from interaction with computer display or its environment. Three major mechanisms that lead to computer vision syndrome are extraocular mechanism, accommodative mechanism and ocular surface mechanism. The visual effects of the computer such as brightness, resolution, glare and quality all are known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification in the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  12. UNDERSTANDING AND PREVENTING COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    REDDY SC

    2008-01-01

    Full Text Available The invention of computer and advancement in information technology has revolutionized and benefited the society but at the same time has caused symptoms related to its usage such as ocular sprain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome which is characterized by the visual symptoms which result from interaction with computer display or its environment. Three major mechanisms that lead to computer vision syndrome are extraocular mechanism, accommodative mechanism and ocular surface mechanism. The visual effects of the computer such as brightness, resolution, glare and quality all are known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification in the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  13. Chapter 11. Quality evaluation of apple by computer vision

    Science.gov (United States)

    Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...

  14. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  15. Computer vision syndrome: A review.

    Science.gov (United States)

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

    Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are either environmental (improper lighting, display position and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and symptoms associated with the use of hand-held and stereoscopic displays.

  16. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  17. Color in Computer Vision Fundamentals and Applications

    CERN Document Server

    Gevers, Theo; van de Weijer, Joost; Geusebroek, Jan-Mark

    2012-01-01

    While the field of computer vision drives many of today’s digital technologies and communication networks, the topic of color has emerged only recently in most computer vision applications. One of the most extensive works to date on color in computer vision, this book provides a complete set of tools for working with color in the field of image understanding. Based on the authors’ intense collaboration for more than a decade and drawing on the latest thinking in the field of computer science, the book integrates topics from color science and computer vision, clearly linking theor

  18. Computer Vision for Timber Harvesting

    DEFF Research Database (Denmark)

    Dahl, Anders Lindbjerg

planning. The investigations in this thesis are done as initial work on a planning and logistics system for timber harvesting called logTracker. In this thesis we have focused on three methods for the logTracker project, which include image segmentation, image classification, and image retrieval...... segments. The purpose of image segmentation is to provide the basis for more advanced computer vision methods like object recognition and classification. Our second method concerns image classification, and we present a method where we classify small timber samples into tree species based on Active Appearance...... to the logTracker project, and ideas for further development of the system are provided. Building a complete logTracker system is a very demanding task, and the conclusion is that it is important to focus on the elements that can bring most value to timber harvest planning. Besides contributing......

  19. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  20. Beam damage detection using computer vision technology

    Science.gov (United States)

    Shi, Jing; Xu, Xiangjun; Wang, Jialai; Li, Gong

    2010-09-01

    In this paper, a new approach for efficient damage detection in engineering structures is introduced. The key concept is to use the mature computer vision technology to capture the static deformation profile of a structure, and then employ profile analysis methods to detect the locations of the damages. By combining with wireless communication techniques, the proposed approach can provide an effective and economical solution for remote monitoring of structure health. Moreover, a preliminary experiment is conducted to verify the proposed concept. A commercial computer vision camera is used to capture the static deformation profiles of cracked cantilever beams under loading. The profiles are then processed to reveal the existence and location of the irregularities on the deformation profiles by applying fractal dimension, wavelet transform and roughness methods, respectively. The proposed concept is validated on both one-crack and two-crack cantilever beam-type specimens. It is also shown that all three methods can produce satisfactory results based on the profiles provided by the vision camera. In addition, the profile quality is the determining factor for the noise level in resultant detection signal.
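
    The profile-analysis step lends itself to a very small illustration. The sketch below (an assumption-laden stand-in, not the authors' fractal-dimension or wavelet implementation) flags irregularities in a measured deflection profile by looking for outliers in its discrete second difference; only NumPy is assumed, and the threshold and synthetic profile are made up for the example.

      import numpy as np

      def find_profile_irregularities(deflection, z_thresh=4.0):
          """Return profile indices where the local curvature deviates abnormally."""
          curvature = np.diff(deflection, n=2)            # discrete second difference
          score = np.abs(curvature - np.median(curvature))
          spread = np.median(score) + 1e-12               # robust spread estimate
          return np.where(score / spread > z_thresh)[0] + 1

      # synthetic cracked-beam profile: smooth deflection plus a slope change (the "crack")
      x = np.linspace(0.0, 1.0, 200)
      profile = 0.01 * x ** 2
      profile[120:] += 0.0004 * (x[120:] - x[120])
      print(find_profile_irregularities(profile))          # flags the point near index 120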

  1. Computer vision in the poultry industry

    Science.gov (United States)

    Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...

  2. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  3. Scale-Space Theory in Computer Vision

    OpenAIRE

    1994-01-01

    A basic problem when deriving information from measured data, such as images, originates from the fact that objects in the world, and hence image structures, exist as meaningful entities only over certain ranges of scale. "Scale-Space Theory in Computer Vision" describes a formal theory for representing the notion of scale in image data, and shows how this theory applies to essential problems in computer vision such as computation of image features and cues to surface shape. The subjects rang...

  4. COMPUTER VISION SYNDROME: A SHORT REVIEW.

    OpenAIRE

    Sameena; Mohd Inayatullah

    2012-01-01

    Computers are probably one of the biggest scientific inventions of the modern era, and since then they have become an integral part of our life. The increased usage of computers has led to a variety of ocular symptoms which include eye strain, tired eyes, irritation, redness, blurred vision, and diplopia, collectively referred to as Computer Vision Syndrome (CVS). CVS may have a significant impact not only on visual comfort but also occupational productivit...

  5. Biological Basis For Computer Vision: Some Perspectives

    Science.gov (United States)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  6. Object categorization: computer and human vision perspectives

    National Research Council Canada - National Science Library

    Dickinson, Sven J

    2009-01-01

    .... The result of a series of four highly successful workshops on the topic, the book gathers many of the most distinguished researchers from both computer and human vision to reflect on their experience...

  7. Computer Vision Assisted Virtual Reality Calibration

    Science.gov (United States)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  9. Computer Vision Method in Human Motion Detection

    Institute of Scientific and Technical Information of China (English)

    FU Li; FANG Shuai; XU Xin-he

    2007-01-01

    Human motion detection based on computer vision is a frontier research topic and is attracting increasing attention in the field of computer vision research. The wavelet transform is used to sharpen the ambiguous edges in human motion images. The effect of shadows on the image processing is also removed, and the edge extraction can be successfully realized. This is an effective method for the research of human motion analysis systems.
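
    As an illustration of the wavelet-sharpening idea described above (OpenCV and PyWavelets are assumed; this is a generic sketch, not the authors' system), the detail coefficients of a Haar decomposition can be amplified before frame differencing, and a morphological opening then suppresses small shadow-like speckle:

      import cv2
      import numpy as np
      import pywt

      def wavelet_sharpen(gray, gain=1.8):
          # amplify the detail (edge) sub-bands of a single-level Haar decomposition
          cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), 'haar')
          sharpened = pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), 'haar')
          return np.clip(sharpened, 0, 255).astype(np.uint8)

      def motion_mask(prev_gray, curr_gray, thresh=25):
          diff = cv2.absdiff(wavelet_sharpen(prev_gray), wavelet_sharpen(curr_gray))
          _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
          # opening removes isolated speckle, e.g. residual shadow noise
          return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

      # usage: mask = motion_mask(previous_frame_gray, current_frame_gray)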

  10. Schlieren sequence analysis using computer vision

    Science.gov (United States)

    Smith, Nathanial Timothy

    Computer vision-based methods are proposed for extraction and measurement of flow structures of interest in schlieren video. As schlieren data has increased with faster frame rates, we are faced with thousands of images to analyze. This presents an opportunity to study global flow structures over time that may not be evident from surface measurements. A degree of automation is desirable to extract flow structures and features to give information on their behavior through the sequence. Using an interdisciplinary approach, the analysis of large schlieren data is recast as a computer vision problem. The double-cone schlieren sequence is used as a testbed for the methodology; it is unique in that it contains 5,000 images, complex phenomena, and is feature rich. Oblique structures such as shock waves and shear layers are common in schlieren images. A vision-based methodology is used to provide an estimate of oblique structure angles through the unsteady sequence. The methodology has been applied to a complex flowfield with multiple shocks. A converged detection success rate between 94% and 97% for these structures is obtained. The modified curvature scale space is used to define features at salient points on shock contours. A challenge in developing methods for feature extraction in schlieren images is the reconciliation of existing techniques with features of interest to an aerodynamicist. Domain-specific knowledge of physics must therefore be incorporated into the definition and detection phases. Known location and physically possible structure representations form a knowledge base that provides a unique feature definition and extraction. Model tip location and the motion of a shock intersection across several thousand frames are identified, localized, and tracked. Images are parsed into physically meaningful labels using segmentation. Using this representation, it is shown that in the double-cone flowfield, the dominant unsteady motion is associated with large scale

  11. Theories and Algorithms of Computational Vision

    Institute of Scientific and Technical Information of China (English)

    Ma Songde; Tan Tieniu; Hu Zhanyi; Jiang Tianzi; Lu Hanqing

    2005-01-01

    Inspired by recent progress in related fields such as cognitive psychology, neural physiology and neural anatomy, the project aims to put forward new computational theories and algorithms which could overcome the main shortcomings of Marr's computational theory, the dominant paradigm in the computer vision field for the last 20 years.

  12. QUALITY ASSESSMENT OF BISCUITS USING COMPUTER VISION

    Directory of Open Access Journals (Sweden)

    Archana A. Bade

    2016-08-01

    Full Text Available As customer expectations for high-quality foods increase day by day, it becomes essential for food industries to maintain the quality of their products. Therefore it is necessary to have a quality inspection system for the product before packaging. Automation in the industry gives better inspection speed compared to human vision, and automation based on computer vision is cost effective, flexible and one of the best alternatives for a more accurate, fast inspection system. Image processing and image analysis are the vital parts of a computer vision system. In this paper, we discuss real-time quality inspection of premium-class biscuits using computer vision. It covers the design of the system, its implementation and verification, and the installation of the complete system at the biscuit factory. The overall system comprises image acquisition, preprocessing, feature extraction using segmentation, color variation analysis, interpretation, and the system hardware.
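
    A minimal sketch of such an inspection pipeline is shown below, using OpenCV. The tolerance values, the dark-background assumption and the choice of area and mean colour as features are illustrative assumptions, not the system actually installed at the plant.

      import cv2
      import numpy as np

      def inspect_biscuit(bgr_image, area_range=(20000, 30000), min_mean_red=90):
          gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)                    # preprocessing
          _, mask = cv2.threshold(blurred, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # segmentation (light biscuit, dark belt)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return "reject: no biscuit found"
          biscuit = max(contours, key=cv2.contourArea)
          area = cv2.contourArea(biscuit)                                # size feature
          mean_colour = cv2.mean(bgr_image, mask=mask)[:3]               # colour feature
          if not (area_range[0] <= area <= area_range[1]):
              return "reject: size out of tolerance"
          if mean_colour[2] < min_mean_red:                              # red channel as a bake proxy
              return "reject: colour out of tolerance"
          return "accept"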

  13. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer vision and robotics are among the most challenging areas of the 21st century. Their applications range from agriculture to medicine, household devices to humanoids, deep-sea exploration to space, and industrial automation to unmanned plants. Today’s technologies demand intelligent machines, which enable applications in various domains and services. Robotics is one such area; it encompasses a number of technologies and its applications are widespread. Computational vision, or machine vision, is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of Computational Vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Object using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, CT and MRI Image Fusion based on Stationary Wavelet Transform. The book also covers articles from applicati...

  14. Laser Imaging Systems For Computer Vision

    Science.gov (United States)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, extending the access of computer "intelligence" to inspection, analysis and decision-making in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of optical methods with computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussions of our present results and of their potential for precise 3D computer vision.

  15. On computer vision in wireless sensor networks.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Ko, Teresa H.

    2004-09-01

    Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which use a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
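
    The "process locally, transmit only cues" idea can be sketched in a few lines (an illustration only, not the authors' sensor-node code; OpenCV is assumed, and the radio interface in the usage comment is hypothetical). A node computes a cheap edge-density scalar per frame and only wakes its radio when that cue changes noticeably:

      import cv2
      import numpy as np

      def edge_density(gray):
          edges = cv2.Canny(gray, 50, 150)
          return float(np.count_nonzero(edges)) / edges.size

      def should_transmit(prev_gray, curr_gray, delta=0.02):
          # compare one scalar per frame instead of shipping whole images
          return abs(edge_density(curr_gray) - edge_density(prev_gray)) > delta

      # usage (hypothetical radio API): if should_transmit(background, frame): radio.send(cue)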

  16. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
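
    A bare-bones version of the segment-then-quantify step described above might look as follows (OpenCV and NumPy assumed; the thresholds and the minimal feature set are illustrative, not those of any particular HCS platform):

      import cv2
      import numpy as np

      def cell_features(fluorescence_gray):
          _, mask = cv2.threshold(fluorescence_gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # segment cells from background
          n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
          features = []
          for i in range(1, n):                       # label 0 is the background
              area = stats[i, cv2.CC_STAT_AREA]
              if area < 50:                           # discard debris
                  continue
              pixels = fluorescence_gray[labels == i]
              features.append({"area": int(area),
                               "mean_intensity": float(pixels.mean()),
                               "total_intensity": float(pixels.sum()),
                               "centroid": tuple(centroids[i])})
          return features                             # per-cell vectors for downstream ML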

  17. Recognition of Mould Colony on Unhulled Paddy Based on Computer Vision using Conventional Machine-learning and Deep Learning Techniques

    Science.gov (United States)

    Sun, Ke; Wang, Zhengjie; Tu, Kang; Wang, Shaojin; Pan, Leiqing

    2016-11-01

    To investigate the potential of conventional and deep learning techniques to recognize the species and distribution of mould in unhulled paddy, samples were inoculated and cultivated with five species of mould, and sample images were captured. The mould recognition methods were built using support vector machine (SVM), back-propagation neural network (BPNN), convolutional neural network (CNN), and deep belief network (DBN) models. An accuracy rate of 100% was achieved by using the DBN model to identify the mould species in the sample images based on selected colour-histogram parameters, followed by the SVM and BPNN models. A pitch segmentation recognition method combined with different classification models was developed to recognize the mould colony areas in the image. The accuracy rates of the SVM and CNN models for pitch classification were approximately 90% and were higher than those of the BPNN and DBN models. The CNN and DBN models showed quicker calculation speeds for recognizing all of the pitches segmented from a single sample image. Finally, an efficient uniform CNN pitch classification model for all five types of sample images was built. This work compares multiple classification models and provides feasible recognition methods for mouldy unhulled paddy recognition.
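
    The colour-histogram route mentioned above can be sketched with scikit-learn and OpenCV (the histogram size, the SVM parameters and the training interface are assumptions for illustration; the BPNN, CNN and DBN variants of the paper are not reproduced here):

      import cv2
      import numpy as np
      from sklearn.svm import SVC

      def colour_histogram(bgr_image, bins=16):
          channels = cv2.split(bgr_image)
          hist = np.concatenate([cv2.calcHist([c], [0], None, [bins], [0, 256]).flatten()
                                 for c in channels])
          return hist / (hist.sum() + 1e-9)            # normalise away image size

      def train_mould_classifier(images, species_labels):
          X = np.array([colour_histogram(img) for img in images])
          return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, species_labels)

      # usage: clf = train_mould_classifier(train_images, train_labels)
      #        species = clf.predict([colour_histogram(new_image)])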

  18. Intelligent Computer Vision System for Automated Classification

    Science.gov (United States)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
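
    A rough sketch of the feature-reduction plus neural-network classification stage is given below (scikit-learn assumed; a standard MLP optimiser stands in for the authors' GLPτS training method, and the layer sizes are illustrative):

      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      def build_tile_classifier(n_components=20):
          return make_pipeline(
              StandardScaler(),                        # feature preprocessing
              PCA(n_components=n_components),          # dimensionality reduction
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
          )

      # usage, with X holding texture feature vectors and y the cork tile classes:
      #   clf = build_tile_classifier().fit(X_train, y_train)
      #   test_accuracy = clf.score(X_test, y_test)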

  19. Computer Vision Tools for Finding Images and Video Sequences.

    Science.gov (United States)

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  1. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    Acts as a single source reference providing readers with an overview of how computer vision can contribute to the different applications in the field of road transportation. This book presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each presented in a tutorial manner, describing the motivation for and benefits of the application, and a description of the state of the art.

  2. Computer vision syndrome (CVS) - Thermographic Analysis

    Science.gov (United States)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has grown exponentially in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their wide acceptance by users. The consequences and impact on visual health of uninterrupted work with computer screens or displays have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great efforts, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are: blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inappropriate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.

  3. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  4. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Information fusion methods for concrete tasks in a computer pan-vision (CPV) system are studied thoroughly, and some research progress is presented. Recognition of vision test objects is realized by fusing vision information with non-vision auxiliary information; applications include recognition of material defects, autonomous recognition of parts by intelligent robots, and automatic understanding and recognition of defect images by computer.

  5. Report on Computer Programs for Robotic Vision

    Science.gov (United States)

    Cunningham, R. T.; Kan, E. P.

    1986-01-01

    Collection of programs supports robotic research. Report describes the computer-vision software library of NASA's Jet Propulsion Laboratory. Programs evolved during past 10 years of research into robotics. Collection includes low- and high-level image-processing software proved in applications ranging from factory automation to spacecraft tracking and grappling. Programs fall into several overlapping categories. Image-utilities category contains low-level routines that provide computer access to image data and some simple graphical capabilities for displaying results of image processing.

  7. Computational and cognitive neuroscience of vision

    CERN Document Server

    2017-01-01

    Despite a plethora of scientific literature devoted to vision research and the trend toward integrative research, the borders between disciplines remain a practical difficulty. To address this problem, this book provides a systematic and comprehensive overview of vision from various perspectives, ranging from neuroscience to cognition, and from computational principles to engineering developments. It is written by leading international researchers in the field, with an emphasis on linking multiple disciplines and the impact such synergy can lead to in terms of both scientific breakthroughs and technology innovations. It is aimed at active researchers and interested scientists and engineers in related fields.

  8. Bringing Vision-Based Measurements into our Daily Life: A Grand Challenge for Computer Vision Systems

    OpenAIRE

    Scharcanski, Jacob

    2016-01-01

    Bringing computer vision into our daily life has been challenging researchers in industry and in academia over the past decades. However, the continuous development of cameras and computing systems turned computer vision-based measurements into a viable option, allowing new solutions to known problems. In this context, computer vision is a generic tool that can be used to measure and monitor phenomena in wide range of fields. The idea of using vision-based measurements is appealing, since the...

  9. Robust level set method for computer vision

    Science.gov (United States)

    Si, Jia-rui; Li, Xiao-pei; Zhang, Hong-wei

    2005-12-01

    The level set method provides powerful numerical techniques for analyzing and solving interface evolution problems based on partial differential equations. It is particularly appropriate for image segmentation and other computer vision tasks. However, noise exists in every image, and noise is the main obstacle to image segmentation. In the level set method, the propagation fronts are apt to leak through gaps at locations of missing or fuzzy boundaries caused by noise. The robust level set method proposed in this paper is based on the adaptive Gaussian filter. The fast marching method provides a fast implementation of the level set method, and the adaptive Gaussian filter can adapt itself to the local characteristics of an image by adjusting its variance. Thus, different parts of an image can be smoothed in different ways according to the degree of noisiness and the type of edges. Experimental results demonstrate that the adaptive Gaussian filter can greatly reduce the noise without distorting the image and makes the level set method more robust and accurate.
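
    A minimal sketch of a variance-adaptive Gaussian smoother in this spirit is shown below (SciPy assumed; the weighting scheme and window size are illustrative choices, not the paper's exact filter):

      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      def adaptive_gaussian(image, sigma_flat=2.5, sigma_edge=0.5, window=7):
          img = image.astype(np.float64)
          local_mean = uniform_filter(img, window)
          local_var = uniform_filter(img * img, window) - local_mean ** 2  # local variance
          weight = local_var / (local_var.max() + 1e-12)                   # ~1 near edges
          heavy = gaussian_filter(img, sigma_flat)     # strong smoothing in flat regions
          light = gaussian_filter(img, sigma_edge)     # mild smoothing near boundaries
          return weight * light + (1.0 - weight) * heavy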

  10. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  11. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    Science.gov (United States)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  13. Computer techniques for electromagnetics

    CERN Document Server

    Mittra, R

    1973-01-01

    Computer Techniques for Electromagnetics discusses the ways in which computer techniques solve practical problems in electromagnetics. It discusses the impact of the emergence of high-speed computers in the study of electromagnetics. This text provides a brief background on the approaches used by mathematical analysts in solving integral equations. It also demonstrates how to use computer techniques in computing current distribution, radar scattering, and waveguide discontinuities, and inverse scattering. This book will be useful for students looking for a comprehensive text on computer techni

  14. Optical correlator techniques applied to robotic vision

    Science.gov (United States)

    Hine, Butler P., III; Reid, Max B.; Downie, John D.

    1991-01-01

    Vision processing is one of the most computationally intensive tasks required of an autonomous robot. The data flow from a single typical imaging sensor is roughly 60 Mbits/sec, which can easily overload current on-board processors. Optical correlator-based processing can be used to perform many of the functions required of a general robotic vision system, such as object recognition, tracking, and orientation determination, and can perform these functions fast enough to keep pace with the incoming sensor data. We describe a hybrid digital electronic/analog optical robotic vision processing system developed at Ames Research Center to test concepts and algorithms for autonomous construction, inspection, and maintenance of space-based habitats. We discuss the system architecture design and implementation, its performance characteristics, and our future plans. In particular, we compare the performance of the system to a more conventional all digital electronic system developed concurrently. The hybrid system consistently outperforms the digital electronic one in both speed and robustness.

  15. JPL Robotics Laboratory computer vision software library

    Science.gov (United States)

    Cunningham, R.

    1984-01-01

    The past ten years of research on computer vision have matured into a powerful real-time system comprised of standardized commercial hardware, computers, and pipeline processing laboratory prototypes, supported by an extensive set of image processing algorithms. The software system was constructed to be transportable via the choice of a popular high-level language (PASCAL) and a widely used computer (VAX-11/750); it comprises a whole realm of low-level and high-level processing software that has proven to be versatile for applications ranging from factory automation to space satellite tracking and grappling.

  16. Machine Learning Techniques in Clinical Vision Sciences.

    Science.gov (United States)

    Caixinha, Miguel; Nunes, Sandrina

    2017-01-01

    This review presents and discusses the contribution of machine learning techniques for diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patients' management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve sensitivity and specificity of disease detection and monitoring, increasing the objectivity of the clinical decision-making process. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches will be presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow creating homogeneous groups (unsupervised learning), or creating a classifier predicting group membership of new cases (supervised learning), when a group label is available for each case. To ensure a good performance of the machine learning techniques in a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, the noise should be removed, the missing data should be treated and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques in ocular disease diagnosis and monitoring will be presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples will be presented in glaucoma, age-related macular degeneration

  17. Computer vision cracks the leaf code.

    Science.gov (United States)

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas

    2016-03-22

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.
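
    The "codebook of visual elements" idea can be illustrated with a simple bag-of-visual-words pipeline (OpenCV and scikit-learn assumed; the ORB features, codebook size and linear SVM are generic stand-ins, far simpler than the published system):

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import LinearSVC

      def orb_descriptors(gray):
          _, desc = cv2.ORB_create(nfeatures=500).detectAndCompute(gray, None)
          return desc if desc is not None else np.empty((0, 32), np.uint8)

      def build_codebook(gray_images, k=200):
          all_desc = np.vstack([orb_descriptors(g) for g in gray_images]).astype(np.float32)
          return KMeans(n_clusters=k, n_init=5, random_state=0).fit(all_desc)

      def bow_histogram(gray, codebook):
          desc = orb_descriptors(gray).astype(np.float32)
          words = codebook.predict(desc) if len(desc) else np.array([], dtype=int)
          hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
          return hist / (hist.sum() + 1e-9)

      # usage: codebook = build_codebook(train_leaf_images)
      #        X = np.array([bow_histogram(g, codebook) for g in train_leaf_images])
      #        clf = LinearSVC().fit(X, train_family_labels)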

  18. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    Science.gov (United States)

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  19. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  20. Potato operation: computer vision for agricultural robotics

    Science.gov (United States)

    Pun, Thierry; Lefebvre, Marc; Gil, Sylvia; Brunet, Denis; Dessimoz, Jean-Daniel; Guegerli, Paul

    1992-03-01

    Each year at harvest time millions of seed potatoes are checked for the presence of viruses by means of an Elisa test. The Potato Operation aims at automating the potato manipulation and pulp sampling procedure, starting from bunches of harvested potatoes and ending with the deposit of potato pulp into Elisa containers. Automating these manipulations addresses several issues, linking robotics and computer vision. The paper reports on the current status of this project. It first summarizes the robotic aspects, which consist of locating a potato in a bunch, grasping it, positioning it into the camera field of view, pumping the pulp sample and depositing it into a container. The computer vision aspects are then detailed. They concern locating particular potatoes in a bunch and finding the position of the best germ where the drill has to sample the pulp. The emphasis is put on the germ location problem. A general overview of the approach is given, which combines the processing of both frontal and silhouette views of the potato, together with movements of the robot arm (active vision). Frontal and silhouette analysis algorithms are then presented. Results are shown that confirm the feasibility of the approach.

  1. Computer assisted audit techniques

    Directory of Open Access Journals (Sweden)

    Dražen Danić

    2008-12-01

    Full Text Available The purpose of this work is to point out the possibilities for more efficient auditing. With the increasingly intensive use of computer techniques to assist auditing, the aims and scope of the audit do not change when it is performed in a computerized information environment. Computer-assisted audit techniques (CAATs) can improve the efficiency and productivity of audit procedures. In a computerized information system, CAATs are the ways in which an auditor can use the computer to gather, or to assist in gathering, audit evidence. There are several reasons why auditors apply computer-assisted techniques; most often, they do so to improve audit efficiency when the data volume is large. Whether, and to what degree, auditors apply such techniques depends on several factors, the most important being the auditors' computer knowledge, professional skill and experience, the availability of computer equipment, the adequacy of computer support, the infeasibility of manual tests, efficiency, and time limits. Through several examples from practice, we show the possibilities of ACL as one of the CAAT tools.

  2. Computer vision technology in log volume inspection

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Log volume inspection is very important in forestry research and paper making engineering. This paper proposes a novel approach based on computer vision technology to cope with log volume inspection. The required hardware system is analyzed and the details of the inspection algorithms are given. A fuzzy-entropy-based image enhancement algorithm is presented for enhancing the image of the log cross-section. In many practical applications the cross-section is often partially invisible, and this is the major obstacle to correct inspection. To solve this problem, a robust Hausdorff distance method is proposed to recover the whole cross-section. Experimental results show that this method is efficient.
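
    As a small illustration of the Hausdorff-distance idea (SciPy assumed; the paper's robust Hausdorff variant and its recovery procedure are not reproduced), a partially visible cross-section contour can be checked against a circular template using only the forward directed distance:

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def circle_points(cx, cy, r, n=180):
          t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
          return np.column_stack([cx + r * np.cos(t), cy + r * np.sin(t)])

      def partial_contour_fits_circle(contour_xy, cx, cy, r, tol=5.0):
          """contour_xy: (N, 2) points of the visible part of the cross-section edge."""
          template = circle_points(cx, cy, r)
          # forward distance only: every visible edge point must lie near the circle,
          # even though parts of the circle have no matching edge points
          d_forward, _, _ = directed_hausdorff(contour_xy, template)
          return d_forward <= tol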

  3. Computer Vision Using Local Binary Patterns

    CERN Document Server

    Pietikainen, Matti; Zhao, Guoying; Ahonen, Timo

    2011-01-01

    The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches have provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, b
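
    For readers unfamiliar with the operator itself, the basic 8-neighbour LBP code can be computed in a few lines of NumPy (a didactic sketch; library implementations such as skimage.feature.local_binary_pattern additionally offer uniform and rotation-invariant variants):

      import numpy as np

      def lbp_8neighbour(gray):
          g = gray.astype(np.int32)
          c = g[1:-1, 1:-1]                            # centre pixels
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          code = np.zeros_like(c)
          for bit, (dy, dx) in enumerate(offsets):
              neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
              code |= (neighbour >= c).astype(np.int32) << bit
          return code.astype(np.uint8)                 # one 8-bit texture code per pixel

      # a texture descriptor is then the 256-bin histogram of these codes:
      #   hist, _ = np.histogram(lbp_8neighbour(image), bins=256, range=(0, 256))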

  4. Computer vision for microscopy diagnosis of malaria.

    Science.gov (United States)

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.

  5. Image Segmentation for Food Quality Evaluation Using Computer Vision System

    Directory of Open Access Journals (Sweden)

    Nandhini. P

    2014-02-01

    Full Text Available Quality evaluation is an important factor in food processing industries using computer vision systems, since human inspection is highly variable. In many countries food processing industries aim at producing defect-free food materials for consumers. Human evaluation techniques suffer from high labour costs, inconsistency and variability. Thus this paper describes the steps for identifying defects in food material using computer vision systems. The main steps in a computer vision system are image acquisition, preprocessing, image segmentation, feature identification and classification. The proposed framework compares various filters; the hybrid median filter, which gave the highest PSNR value, is selected and used in preprocessing. Image segmentation techniques such as colour-based binary image segmentation and particle swarm optimization are compared, segmentation parameters such as accuracy, sensitivity and specificity are calculated, and colour-based binary image segmentation is found to be well suited for food quality evaluation. Finally this paper provides an efficient method for identifying the defective parts in food materials.
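
    The filter-comparison step can be sketched as follows (OpenCV and NumPy assumed; a plain median blur stands in for the hybrid median filter, and the candidate list is illustrative):

      import cv2
      import numpy as np

      def psnr(reference, filtered):
          mse = np.mean((reference.astype(np.float64) - filtered.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

      def compare_filters(clean_gray, noisy_gray):
          candidates = {
              "gaussian": cv2.GaussianBlur(noisy_gray, (5, 5), 0),
              "median": cv2.medianBlur(noisy_gray, 5),
              "bilateral": cv2.bilateralFilter(noisy_gray, 9, 75, 75),
          }
          return {name: psnr(clean_gray, out) for name, out in candidates.items()}

      # the filter with the highest PSNR is then used in the preprocessing stage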

  6. Computer Vision Approach for Low Cost, High Precision Measurement of Grapevine Trunk Diameter in Outdoor Conditions

    OpenAIRE

    Pérez, Diego Sebastián; Bromberg, Facundo; Antivilo, Francisco Gonzalez

    2014-01-01

    Trunk diameter is a variable of agricultural interest, used mainly in the prediction of fruit trees production. It is correlated with leaf area and biomass of trees, and consequently gives a good estimate of the potential production of the plants. This work presents a low cost, high precision method for the measurement of trunk diameter of grapevines based on Computer Vision techniques. Several methods based on Computer Vision and other techniques are introduced in the literature. These metho...

  7. A practical introduction to computer vision with OpenCV

    CERN Document Server

    Dawson-Howe, Kenneth

    2014-01-01

    Explains the theory behind basic computer vision and provides a bridge from the theory to practical implementation using the industry standard OpenCV libraries Computer Vision is a rapidly expanding area and it is becoming progressively easier for developers to make use of this field due to the ready availability of high quality libraries (such as OpenCV 2).  This text is intended to facilitate the practical use of computer vision with the goal being to bridge the gap between the theory and the practical implementation of computer vision. The book will explain how to use the relevant OpenCV

  8. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Silvia Matiacevich

    2013-01-01

    Full Text Available Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke,” “Brigitta”, “Elliott”, “Centurion”, “Star,” and “Jewel”, measuring quality parameters such as °Brix, pH, and moisture content using standard techniques, and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P<0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements.

  9. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision.

    Science.gov (United States)

    Matiacevich, Silvia; Celis Cofré, Daniela; Silva, Patricia; Enrione, Javier; Osorio, Fernando

    2013-01-01

    Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: "Duke," "Brigitta", "Elliott", "Centurion", "Star," and "Jewel", measuring quality parameters such as °Brix, pH, and moisture content using standard techniques, and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0-21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P<0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements.

  10. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Document Server

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  11. Computer vision research at Marshall Space Flight Center

    Science.gov (United States)

    Vinz, Frank L.

    1990-01-01

    Orbital docking, inspection, and servicing are operations which have the potential for capability enhancement as well as cost reduction for space operations by the application of computer vision technology. Research at MSFC has been a natural outgrowth of orbital docking simulations for remote manually controlled vehicles such as the Teleoperator Retrieval System and the Orbital Maneuvering Vehicle (OMV). Baseline design of the OMV dictates teleoperator control from a ground station. This necessitates a high data-rate communication network and results in several seconds of time delay. Operational costs and vehicle control difficulties could be alleviated by an autonomous or semi-autonomous control system onboard the OMV which would be based on a computer vision system having the capability to recognize video images in real time. A concept under development at MSFC with these attributes is based on syntactic pattern recognition. It uses tree graphs for rapid recognition of binary images of known orbiting target vehicles. This technique and others being investigated at MSFC will be evaluated in realistic conditions by the use of MSFC orbital docking simulators. Computer vision is also being applied at MSFC as part of the supporting development for Work Package One of Space Station Freedom.

  12. Local spatial frequency analysis for computer vision

    Science.gov (United States)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
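
    One common way to obtain such a local space/frequency description is a Gabor filter bank, sketched below (OpenCV assumed; the kernel sizes, wavelengths and orientations are illustrative, and the paper's representation is more general than this):

      import cv2
      import numpy as np

      def gabor_responses(gray, wavelengths=(4, 8, 16), n_orientations=4):
          img = gray.astype(np.float32)
          responses = {}
          for lam in wavelengths:
              for k in range(n_orientations):
                  theta = k * np.pi / n_orientations
                  kernel = cv2.getGaborKernel((31, 31), 0.5 * lam, theta, lam, 0.5, 0)
                  responses[(lam, round(theta, 3))] = cv2.filter2D(img, cv2.CV_32F, kernel)
          return responses   # per-pixel response at each (wavelength, orientation)

      # strong responses at short wavelengths indicate fine texture (and aliasing risk);
      # a response concentrated at one orientation across scales suggests an oriented structure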

  13. Computer vision research with new imaging technology

    Science.gov (United States)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness of the approach on face images captured by a light field camera at different poses.

  14. Topographic Mapping of Residual Vision by Computer

    Science.gov (United States)

    MacKeben, Manfred

    2008-01-01

    Many persons with low vision have diseases that damage the retina only in selected areas, which can lead to scotomas (blind spots) in perception. The most frequent of these diseases is age-related macular degeneration (AMD), in which foveal vision is often impaired by a central scotoma that impairs vision of fine detail and causes problems with…

  16. Non-Boolean computing with nanomagnets for computer vision applications

    Science.gov (United States)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  17. GROUP PROFILE Computer Technique

    Directory of Open Access Journals (Sweden)

    Andrey V. Sidorenkov

    2015-01-01

    Full Text Available This article contains a description of the structure, the software and functional capabilities, and the scope and purposes of application of the Group Profile (GP) computer technique. This technique rests on a conceptual basis (the microgroup theory), includes 16 new and modified questionnaires, and a unique algorithm, tied to the questionnaires, for the identification of informal groups. The GP yields a wide range of data about the group as a whole (47 indices), each informal group (43 indices), and each group member (16 indices). The GP technique can be used to study different types of groups: production (work groups, design teams, military units, etc.), academic (school classes, student groups), and sports.

  18. Gesture Recognition by Computer Vision: An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  20. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  1. A Framework for Generic State Estimation in Computer Vision Applications

    NARCIS (Netherlands)

    Sminchisescu, Cristian; Telea, Alexandru

    2001-01-01

    Experimenting and building integrated, operational systems in computational vision poses both theoretical and practical challenges, involving methodologies from control theory, statistics, optimization, computer graphics, and interaction. Consequently, a control and communication structure is needed

  3. On the performances of computer vision algorithms on mobile platforms

    Science.gov (United States)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.
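
    The mobile handsets themselves are not reproduced here, but the kind of per-task timing such a comparison relies on can be sketched on any machine with OpenCV; the image path is an assumption, Otsu thresholding stands in for the paper's segmentation stage, and the Haar cascade file is the one shipped with the opencv-python distribution.

```python
import time
import cv2

img = cv2.imread("scene.jpg")                      # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def timed(label, fn, repeats=10):
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    print(f"{label}: {(time.perf_counter() - t0) / repeats * 1e3:.1f} ms")

orb = cv2.ORB_create(nfeatures=500)                # keypoint extraction
face = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

timed("keypoint extraction", lambda: orb.detectAndCompute(gray, None))
timed("face detection", lambda: face.detectMultiScale(gray, 1.1, 4))
timed("segmentation (Otsu)", lambda: cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU))
```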

  4. Computer vision for driver assistance systems

    Science.gov (United States)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing branches. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.

  5. Computer vision for yarn microtension measurement.

    Science.gov (United States)

    Wang, Qing; Lu, Changhou; Huang, Ran; Pan, Wei; Li, Xueyong

    2016-03-20

    Yarn tension is an important parameter for assuring textile quality. In this paper, an optical method to measure the microtension of moving yarn automatically in a winding system is proposed. The proposed method measures the microtension of the moving yarn by analyzing the captured images. With a line laser illuminating the moving yarn, a linear array CCD camera is used to capture the images. Design principles of yarn microtension measuring equipment based on computer vision are presented. A local border difference algorithm is used to search the upper border of the moving yarn as the characteristic line, and Fourier descriptors are used to filter the high-frequency noise caused by unevenness of the yarn diameter. Based on the average value of the characteristic line, the captured images are classified into sagging images and vibration images. The average value is taken as the sag coordinate of the sagging images. The peak and trough coordinates of the vibration are obtained by change-point detection. Then, according to axially moving string and catenary theory, we obtain the microtension of the moving yarn. Experiments were performed and the results compared with a resistance strain sensor, showing that the proposed method is effective and highly accurate.
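
    A much-simplified sketch of the image-processing half of this pipeline follows: finding the upper border of the yarn in each frame and smoothing it in the Fourier domain. The threshold, the number of retained coefficients, and the camera geometry are assumptions, and the actual tension model from moving-string/catenary theory is not reproduced.

```python
import numpy as np

def upper_border(img, dark_thresh=80):
    """For each column of a grayscale frame, return the row index of the first
    pixel dark enough to belong to the yarn (a stand-in for the paper's local
    border difference search)."""
    mask = img < dark_thresh
    return mask.argmax(axis=0).astype(float)       # first matching row per column

def fourier_smooth(profile, keep=20):
    """Suppress high-frequency noise from yarn diameter unevenness by keeping
    only the lowest `keep` Fourier coefficients of the border profile."""
    spec = np.fft.rfft(profile)
    spec[keep:] = 0.0
    return np.fft.irfft(spec, n=len(profile))

# The mean of the smoothed border gives the sag coordinate for "sagging" frames;
# peaks and troughs of the same profile over time characterize the vibration.
```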

  6. Dataflow-Based Mapping of Computer Vision Algorithms onto FPGAs

    Directory of Open Access Journals (Sweden)

    Schlessman Jason

    2007-01-01

    Full Text Available We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF, which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.

  7. Mahotas: Open source software for scriptable computer vision

    OpenAIRE

    Luis Pedro Coelho

    2013-01-01

    Mahotas is a computer vision library for Python. It contains traditional image processing functionality such as filtering and morphological operations as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, a dynamic programming language, which is appropriate for fast development, but the algorithms are implemented in C++ and are tuned for speed. The library is designed to fit in with the s...

  8. Vision Trainer Teaches Focusing Techniques at Home

    Science.gov (United States)

    2015-01-01

    Based on work Stanford Research Institute did for Ames Research Center, Joseph Trachtman developed a vision trainer to treat visual focusing problems in the 1980s. In 2014, Trachtman, operating out of Seattle, released a home version of the device called the Zone-Trac. The inventor has found the biofeedback process used by the technology induces an alpha-wave brain state, causing increased hand-eye coordination and reaction times, among other effects

  9. COMPUTER VISION APPLIED IN THE PRECISION CONTROL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Computer vision and its application in precision control systems are discussed. During fabrication, the accuracy of the products should be controlled reasonably and completely; the precision should be maintained and adjusted according to feedback from on-line or off-line measurements in the different procedures. Computer vision is one useful method for doing this. Computer vision and image manipulation are presented and, based on this, an n-dimensional vector for appraising machining precision is given.

  10. Polynomial Eigenvalue Solutions to Minimal Problems in Computer Vision.

    Science.gov (United States)

    Kukelova, Zuzana; Bujnak, Martin; Pajdla, Tomas

    2012-07-01

    We present a method for solving systems of polynomial equations appearing in computer vision. This method is based on polynomial eigenvalue solvers and is more straightforward and easier to implement than the state-of-the-art Gröbner basis method since eigenvalue problems are well studied, easy to understand, and efficient and robust algorithms for solving these problems are available. We provide a characterization of problems that can be efficiently solved as polynomial eigenvalue problems (PEPs) and present a resultant-based method for transforming a system of polynomial equations to a polynomial eigenvalue problem. We propose techniques that can be used to reduce the size of the computed polynomial eigenvalue problems. To show the applicability of the proposed polynomial eigenvalue method, we present the polynomial eigenvalue solutions to several important minimal relative pose problems.
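
    The reduction at the heart of this approach, turning a quadratic polynomial eigenvalue problem (A0 + λA1 + λ²A2)x = 0 into an ordinary generalized eigenvalue problem by companion linearization, can be written in a few lines of NumPy/SciPy. The matrices below are random stand-ins, not an actual minimal relative-pose problem, and the paper's size-reduction techniques are not shown.

```python
import numpy as np
from scipy.linalg import eig

def solve_quadratic_pep(A0, A1, A2):
    """Solve (A0 + lam*A1 + lam^2*A2) x = 0 via first companion linearization."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-A0, -A1]])   # pencil A - lam*B acting on [x; lam*x]
    B = np.block([[I, Z], [Z, A2]])
    lam, V = eig(A, B)
    return lam, V[:n, :]                 # eigenvalues and the x-part of each eigenvector

rng = np.random.default_rng(0)
A0, A1, A2 = (rng.standard_normal((4, 4)) for _ in range(3))
lam, X = solve_quadratic_pep(A0, A1, A2)
residual = np.linalg.norm((A0 + lam[0] * A1 + lam[0] ** 2 * A2) @ X[:, 0])
print(residual)   # should be close to zero
```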

  11. Fish species recognition using computer vision and a neural network

    NARCIS (Netherlands)

    Storbeck, F.; Daan, B.

    2001-01-01

    A system is described to recognize fish species by computer vision and a neural network program. The vision system measures a number of features of fish as seen by a camera perpendicular to a conveyor belt. The features used here are the widths and heights at various locations along the fish. First
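
    The record is truncated, but the described setup (widths and heights measured along the fish, fed to a neural network classifier) maps naturally onto a small multilayer perceptron. The data and layer sizes below are purely illustrative, not the system described in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical dataset: 10 width/height measurements per fish, 3 species.
X = rng.normal(size=(300, 10)) + np.repeat(np.arange(3), 100)[:, None] * 0.8
y = np.repeat(np.arange(3), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```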

  12. Computer Assisted Audit Techniques

    Directory of Open Access Journals (Sweden)

    Eugenia Iancu

    2007-01-01

    Full Text Available From the modern point of view, audit takes into account especially the information systems, representing mainly the examination performed by a professional as regards the manner of developing an activity by means of comparing it to the quality criteria specific to this activity. Having as reference point this very general definition of auditing, it must be emphasized that the best known segment of auditing is the financial audit, which had an evolution parallel to that of accountancy. The present-day phase of development of the financial audit has as its main trait the internationalization of the accountant profession. Worldwide there are multinational companies that offer services in the financial auditing, taxing and consultancy domain. The auditors, natural persons and audit companies, take part in the work of the national and international authorities for setting out norms in the accountancy and auditing domain. The computer assisted audit techniques can be classified in several manners according to the approaches used by the auditor. The best-known techniques are comprised in the following categories: test data techniques, integrated test, parallel simulation, revising the program logic, programs developed upon request, generalized audit software, utility programs and expert systems.

  13. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    Science.gov (United States)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and that, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  14. Use of Computer Vision to Detect Tangles in Tangled Objects

    OpenAIRE

    Parmar, Paritosh

    2014-01-01

    Untangling of structures like ropes and wires by autonomous robots can be useful in areas such as personal robotics, industry, and electrical wiring and repair by robots. This problem can be tackled by using a computer vision system in the robot. This paper proposes a computer vision based method for analyzing visual data acquired from a camera to perceive the overlap of wires, ropes and hoses, i.e. to detect tangles. Information obtained after processing the image according to the proposed method compr...

  15. Computer vision and laser scanner road environment perception

    OpenAIRE

    García, Fernando; Ponz Vila, Aurelio; Martín Gómez, David; Escalera, Arturo de la; Armingol, José M.

    2014-01-01

    A data fusion procedure is presented to enhance classical Advanced Driver Assistance Systems (ADAS). The novel vehicle safety approach combines two classical sensors: computer vision and laser scanner. The laser scanner algorithm performs detection of vehicles and pedestrians based on pattern matching algorithms. The computer vision approach is based on Haar-like features for vehicles and Histogram of Oriented Gradients (HOG) features for pedestrians. The high level fusion procedure uses a Kalman Filter...
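
    The vision half of this kind of fusion (HOG features for pedestrian detection) is available out of the box in OpenCV; a minimal sketch, with the image path assumed and the laser-scanner branch and Kalman-filter fusion omitted:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("road_scene.jpg")               # hypothetical camera frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
# In the full system these detections would be fused with laser-scanner tracks
# (e.g., in a Kalman filter) rather than used on their own.
```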

  16. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  17. Computer vision for dual spacecraft proximity operations -- A feasibility study

    Science.gov (United States)

    Stich, Melanie Katherine

    A computer vision-based navigation feasibility study consisting of two navigation algorithms is presented to determine whether computer vision can be used to safely navigate a small semi-autonomous inspection satellite in proximity to the International Space Station. Using stereoscopic image-sensors and computer vision, the relative attitude determination and the relative distance determination algorithms estimate the inspection satellite's relative position in relation to its host spacecraft. An algorithm needed to calibrate the stereo camera system is presented, and this calibration method is discussed. These relative navigation algorithms are tested in NASA Johnson Space Center's simulation software, Engineering Dynamic On-board Ubiquitous Graphics (DOUG) Graphics for Exploration (EDGE), using a rendered model of the International Space Station to serve as the host spacecraft. Both vision-based algorithms attained successful results, and recommended future work is discussed.
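
    The relative-distance part of such a system ultimately rests on stereo triangulation: depth Z = f·B/d for focal length f (in pixels), baseline B and disparity d. A minimal block-matching sketch with OpenCV, where the calibration values and image files are placeholders rather than the thesis' actual setup:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified stereo pair (assumed)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

f_px, baseline_m = 800.0, 0.12                          # placeholder calibration
depth_m = np.full_like(disparity, np.inf)
valid = disparity > 0
depth_m[valid] = f_px * baseline_m / disparity[valid]   # Z = f * B / d
```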

  18. Dynamic programming and graph algorithms in computer vision.

    Science.gov (United States)

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.

  19. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950

  20. 3D computer vision using Point Grey Research stereo vision cameras

    Institute of Scientific and Technical Information of China (English)

    Don Murray; Vlad Tucakov; WEI Xiong

    2008-01-01

    This paper provides an introduction to stereo vision systems designed by Point Grey Research and describes the possible application of these types of systems. The paper presents an overview of stereo vision techniques and outlines the critical aspects of putting together a system that can perform in the real world. It also provides an overview of how the cameras can be used to facilitate stereo research.

  1. Wavelet applied to computer vision in astrophysics

    Science.gov (United States)

    Bijaoui, Albert; Slezak, Eric; Traina, Myriam

    2004-02-01

    Multiscale analyses can be provided by applying wavelet transforms. For image processing purposes, we applied algorithms which imply a quasi-isotropic vision. For a uniform noisy image, a wavelet coefficient W has a probability density function (PDF) p(W) which depends on the noise statistic. The PDF was determined for many statistical noise models: Gaussian, Poisson, Rayleigh, and exponential. For CCD observations, the Anscombe transform was generalized to a mixed Gauss+Poisson noise. From the discrete wavelet transform a set of significant wavelet coefficients (SSWC) is obtained. Many applications have been derived, such as denoising and deconvolution. Our main application is the decomposition of the image into objects, i.e. the vision. At each scale an image labelling is performed in the SSWC. An interscale graph linking the fields of significant pixels is then obtained. The objects are identified using this graph. The wavelet coefficients of the tree related to a given object allow one to reconstruct its image by a classical inverse method. This vision model has been applied to astronomical images, improving the analysis of complex structures.
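
    The denoising application mentioned here (keep only the significant wavelet coefficients, then invert) can be sketched with PyWavelets. The universal soft threshold below is a generic rule of thumb, not the noise-specific PDFs or the Anscombe generalization derived in the paper.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db2", level=3):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail) for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```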

  2. Safety Computer Vision Rules for Improved Sensor Certification

    DEFF Research Database (Denmark)

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints....... The language allows developers to increase trustworthiness in the robot perception system, which we argue would increase compliance with safety standards. We demonstrate the usage of the language to improve reliability in a perception pipeline, thus allowing the vision expert to concisely express the safety...

  3. Application of chaos and fractals to computer vision

    CERN Document Server

    Farmer, Michael E

    2014-01-01

    This book provides a thorough investigation of the application of chaos theory and fractal analysis to computer vision. The field of chaos theory has been studied in dynamical physical systems, and has been very successful in providing computational models for very complex problems ranging from weather systems to neural pathway signal propagation. Computer vision researchers have derived motivation for their algorithms from biology and physics for many years as witnessed by the optical flow algorithm, the oscillator model underlying graphical cuts and of course neural networks. These algorithm

  4. Development of a wireless computer vision instrument to detect biotic stress in wheat.

    Science.gov (United States)

    Casanova, Joaquin J; O'Shaughnessy, Susan A; Evett, Steven R; Rush, Charles M

    2014-09-23

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera. The wireless computer vision system used in this study is therefore a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.
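
    A rough analogue of the segmentation step (EM clustering of pixels into soil and vegetation, with mean vegetation hue as the stress indicator) can be written with scikit-learn's Gaussian mixture model. The two-component assumption, the HSV conversion, and the image path are simplifications, not the published algorithm.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

img = cv2.imread("wheat_plot.jpg")                          # hypothetical field image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0].reshape(-1, 1).astype(float)

gmm = GaussianMixture(n_components=2, random_state=0).fit(hue)   # EM: soil vs. vegetation
labels = gmm.predict(hue)
veg = labels == np.argmax(gmm.means_.ravel())               # assume vegetation is the higher-hue (greener) cluster
print("vegetation hue:", hue[veg].mean(), "cover fraction:", veg.mean())
```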

  5. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision based techniques and spectral signatures is described, as are the vision instruments for food analysis and the datasets of the food items used in this thesis. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. One supervised feature selection algorithm... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied to datasets of different food items: meat, dairy, fruits...

  6. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    Science.gov (United States)

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant…

  8. DIKU-LASMEA Workshop on Computer Vision, Copenhagen, March, 2009

    DEFF Research Database (Denmark)

    Fihl, Preben

    This report will cover the participation in the DIKU-LASMEA Workshop on Computer Vision held at the department of computer science, University of Copenhagen, in March 2009. The report will give a concise description of the topics presented at the workshop, and briefly discuss how the work relates...

  11. Inspecting wood surface roughness using computer vision

    Science.gov (United States)

    Zhao, Xuezeng

    1995-01-01

    Wood surface roughness is one of the important indices of manufactured wood products. This paper presents an attempt to develop a new method of evaluating manufactured wood surface roughness through the use of image processing and pattern recognition techniques. A collimated plane of light or a laser is directed onto the inspected wood surface at a sharp angle of incidence. An optical system consisting of lenses focuses the image of the surface onto the objective of a CCD camera; the CCD camera captures the image, which is digitized using a CA6300 board. The digitized image is transmitted to a microcomputer. Using the methodology presented in this paper, the computer filters out noise and the anatomical wood grain and gives an evaluation of the nature of the manufactured wood surface. Preliminary results indicate that the method has the advantages of being non-contact, three-dimensional, and high-speed. The method can be used in classification and in-time measurement of manufactured wood products.

  12. Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.

    Science.gov (United States)

    Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar

    2016-05-01

    Physical rehabilitation supported by computer-assisted interfaces is gaining popularity among the health-care fraternity. In this paper, we have proposed a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. A Leap Motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have proposed an interface using the Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) have been used to classify gesture sequences performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly when applied to isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
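
    The isolated-gesture stage compares discriminant analysis with support vector machines; with scikit-learn that comparison is a few lines. The feature matrix standing in for the Leap-motion palm parameters below is simulated, and the HMM/CRF sequence stage is not shown.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12)) + np.repeat(np.arange(4), 50)[:, None]  # 4 gesture classes, 12 palm features
y = np.repeat(np.arange(4), 50)

for name, clf in [("DA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf", C=1.0))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```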

  13. Robot vision: obstacle-avoidance techniques for unmanned aerial vehicles

    NARCIS (Netherlands)

    Carloni, Raffaella; Lippiello, Vincenzo; D'auria, Massimo; Fumagalli, Matteo; Mersha, Abeje Y.; Stramigioli, Stefano; Sicilano, Bruno

    2013-01-01

    In this article, a vision-based technique for obstacle avoidance and target identification is combined with haptic feedback to develop a new teleoperated navigation system for underactuated aerial vehicles in unknown environments. A three-dimensional (3-D) map of the surrounding environment is built

  14. Performance evaluation of image enhancement techniques on night vision imagery

    NARCIS (Netherlands)

    Dijk, J.; Bijl, P.; Eekeren, W.M. van

    2010-01-01

    Recently, new techniques for night-vision cameras have been developed. Digital image intensifiers are becoming available on the market. So-called EMCCD (electron-multiplying CCD) cameras, which can also record imagery in dim conditions, have likewise been developed. In this paper we present data recorded with both types of

  16. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision.

    Science.gov (United States)

    Wolff, J Gerard

    2014-01-01

    The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.

  17. A Low Cost Vision Based Hybrid Fiducial Mark Tracking Technique for Mobile Industrial Robots

    OpenAIRE

    Mohammed Y Aalsalem; Wazir Zada Khan; Quratul Ain Arshad

    2012-01-01

    The field of robotic vision is developing rapidly. Robots can react intelligently and provide assistance to user activities through sentient computing. Since industrial applications pose complex requirements that cannot be handled by humans, an efficient low cost and robust technique is required for the tracking of mobile industrial robots. The existing sensor based techniques for mobile robot tracking are expensive and complex to deploy, configure and maintain. Also some of them demand dedic...

  18. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  20. Computer vision and machine learning with RGB-D sensors

    CERN Document Server

    Shao, Ling; Kohli, Pushmeet

    2014-01-01

    This book presents an interdisciplinary selection of cutting-edge research on RGB-D based computer vision. Features: discusses the calibration of color and depth cameras, the reduction of noise on depth maps and methods for capturing human performance in 3D; reviews a selection of applications which use RGB-D information to reconstruct human figures, evaluate energy consumption and obtain accurate action classification; presents an approach for 3D object retrieval and for the reconstruction of gas flow from multiple Kinect cameras; describes an RGB-D computer vision system designed to assist t

  1. OpenCV 3.0 computer vision with Java

    CERN Document Server

    Baggio, Daniel Lélis

    2015-01-01

    If you are a Java developer, student, researcher, or hobbyist wanting to create computer vision applications in Java then this book is for you. If you are an experienced C/C++ developer who is used to working with OpenCV, you will also find this book very useful for migrating your applications to Java. All you need is basic knowledge of Java, with no prior understanding of computer vision required, as this book will give you clear explanations and examples of the basics.

  2. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  3. Improvement of molecular techniques: A multidisciplinar vision

    Directory of Open Access Journals (Sweden)

    Bruno do Amaral Crispim

    2016-08-01

    Full Text Available The advances in molecular technologies since the discovery of PCR (Polymerase Chain Reaction), and their association with the use of molecular markers, have allowed rapid progress in the development of technologies and equipment able to generate and analyze data on a large scale. This has revolutionized research that until recently was based only on single markers, such as the analysis of a Single Nucleotide Polymorphism (SNP); in the genomic era it is already possible to genotype thousands or even millions of SNPs in a few hours. This evolution has improved our knowledge of genomes, creating expectations and real possibilities for applying these techniques in various fields, from medicine to animal production. These new technologies analyze DNA variability at points of interest in chromosomes, which are technically called molecular markers. These markers can be used in various applications, including paternity testing, construction of genetic maps, mapping of quantitatively inherited characteristics, isolation of genes, marker-assisted selection and characterization of the genetic diversity of different species. The improvement of sequencing and bioinformatics technologies was crucial to studies of characteristics of interest using high-density genetic information. SNP genotyping panels stimulated research in the human area, especially in studies of cancer and the exome, and also in agribusiness, aiming at the search for superior genotypes of domestic plants and animals. The distinguishing advantage of the panels is the possibility of studying complex characteristics, since the wide distribution of markers favors, through linkage disequilibrium, the identification of genomic regions associated with the phenotypes under study. This advance has therefore become essential for greater accuracy and speed in molecular diagnostics, increasing the accuracy in the selection of individuals with

  4. A multidisciplinary approach to solving computer related vision problems.

    Science.gov (United States)

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer related vision issues by including optometry as a part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions who contribute to workstation design or who provide advice to improve worker comfort, safety and efficiency. Optometrists have a role identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and examples of successful partnerships at a number of professional levels including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve vision related computer issues in a cohesive, rather than fragmented way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  5. Photogrammetric computer vision statistics, geometry, orientation and reconstruction

    CERN Document Server

    Förstner, Wolfgang

    2016-01-01

    This textbook offers a statistical view on the geometry of multiple view analysis, required for camera calibration and orientation and for geometric scene reconstruction based on geometric image features. The authors have backgrounds in geodesy and also long experience with development and research in computer vision, and this is the first book to present a joint approach from the converging fields of photogrammetry and computer vision. Part I of the book provides an introduction to estimation theory, covering aspects such as Bayesian estimation, variance components, and sequential estimation, with a focus on the statistically sound diagnostics of estimation results essential in vision metrology. Part II provides tools for 2D and 3D geometric reasoning using projective geometry. This includes oriented projective geometry and tools for statistically optimal estimation and test of geometric entities and transformations and their relations, tools that are useful also in the context of uncertain reasoning in po...

  6. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from
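
    The multiresolution idea described here (estimate on a coarse grid, then refine at finer resolutions) is the same coarse-to-fine scheme that pyramid-based optical flow implementations still use. A compact sketch with OpenCV, where Farnebäck's routine builds the Gaussian pyramid internally; the frame files are assumptions and this is not the thesis' own algorithm:

```python
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical consecutive video frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Arguments: flow init, pyramid scale 0.5, 4 pyramid levels, window size 15,
# 3 iterations per level, polynomial neighborhood 5, poly sigma 1.2, flags 0.
# The flow found at a coarse level initializes the estimate at the next finer level.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 4, 15, 3, 5, 1.2, 0)
print(flow.shape)   # (H, W, 2) motion field in pixels
```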

  7. Using Advanced Computer Vision Algorithms on Small Mobile Robots

    Science.gov (United States)

    2006-04-20

    this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the...following two algorithms useful on mobile robots: (1) object classification using a boosted Cascade of classifiers trained with the Adaboost training

  8. A Knowledge-Intensive Approach to Computer Vision Systems

    NARCIS (Netherlands)

    Koenderink-Ketelaars, N.J.J.P.

    2010-01-01

    This thesis focusses on the modelling of knowledge-intensive computer vision tasks. Knowledge-intensive tasks are tasks that require a high level of expert knowledge to be performed successfully. Such tasks are generally performed by a task expert. Task experts have a lot of experience in performing

  9. Information theory in computer vision and pattern recognition

    CERN Document Server

    Escolano, Francisco; Bonev, Boyan

    2009-01-01

    Researchers are bringing information theory elements to the computer vision and pattern recognition (CVPR) arena. Among these elements there are measures (entropy, mutual information), principles (maximum entropy, minimax entropy) and theories (rate distortion theory, method of types). This book explores the latter elements.
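
    Of the measures listed, mutual information is probably the one most often used in vision practice (for example in multimodal image registration); a small NumPy version based on a joint histogram, offered here only as a generic illustration of the book's subject matter:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information (in bits) between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)            # marginal of the first image
    py = pxy.sum(axis=0, keepdims=True)            # marginal of the second image
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```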

  10. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  11. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; OHARA, Kenichi

    2007-01-01

    In the field of micro vision, there has been little research compared with the macro environment. However, by applying results from macro-scale computer vision techniques, one can measure and observe the micro environment. Moreover, based on the effects particular to the micro environment, it is possible to discover new theories and new techniques.

  12. METHODOLOGY OF TECHNIQUE PREPARATION FOR LOW VISION JAVELIN THROWERS

    Directory of Open Access Journals (Sweden)

    Milan Matić

    2013-07-01

    Full Text Available The javelin throwing discipline for disabled people has been expanding for the past couple of years, and world records have been improving year after year. The essential part of preparing low vision javelin throwers is mastering the elements of technique that are crucial for achieving better results. Methods of theoretical analysis and descriptive and comparative survey were applied. Relevant knowledge in the area of low vision javelin throwers was analyzed and systematized, then interpreted theoretically and applied to a top javelin thrower, which served as a basis for the innovative approach to methodology and practice with disabled people. Due to visual impairment, coordination and balance are challenged; this limitation is what makes the practical difference in the methodology explained in this article. Apart from goals focused on improving condition and competition results, more specialized goals should be considered, e.g. improving orientation, balance and the socialization process for people who have low vision. The special approach used in technique preparation brought a significant improvement in the technique of our famous Paralympian Grlica Miloš. In addition to the improved technique, he achieved better results at major competitions and won several valuable international prizes. The area of 'sport for disabled people' is not sufficiently present in the practice of sports professionals; more articles and scientific surveys on this topic are needed for further work and improvement of results with these kinds of sportsmen.

  13. Enhanced computer vision with Microsoft Kinect sensor: a review.

    Science.gov (United States)

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  14. A Computer Vision Approach to Identify Einstein Rings and Arcs

    Science.gov (United States)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspection of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in the images. We propose a two-tier approach, first pre-selecting massive galaxies associated with multiple blue objects as possible lenses, then using the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in DES, HSC-SSP, and the UltraVISTA survey, illustrating the versatility of our approach.
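
    The workhorse named here, the circle Hough transform, is directly available in OpenCV; a minimal sketch of the ring-finding step, where the cutout file and all parameters are guesses and the pre-selection of lens candidates is not shown:

```python
import cv2
import numpy as np

img = cv2.imread("cutout.png", cv2.IMREAD_GRAYSCALE)   # hypothetical survey cutout around a candidate lens
img = cv2.GaussianBlur(img, (5, 5), 1.5)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=50)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"ring candidate at ({x}, {y}) with radius {r} px")
```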

  15. Measurement of meat color using a computer vision system.

    Science.gov (United States)

    Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada

    2013-01-01

    The limits of the colorimeter and a technique of image analysis in evaluating the color of beef, pork, and chicken were investigated. The Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the image of the sample displayed on the monitor, a similarity test was carried out using a trained panel. The panelists found the digital images of the samples visualized on the monitor very similar to the actual ones. The panelists then compared two colors, both generated by the software Adobe Photoshop CS3, one using the L, a and b values read by the colorimeter and the other obtained using the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). The panelists found the digital images very similar to the actual samples; on the contrary, when comparing the two generated colors they found significant differences between them, and the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle and chroma obtained with the CVS and the colorimeter were statistically significant. Overall, the colorimeter did not appear to reproduce the color of meat faithfully; instead, the CVS method seemed to give valid measurements that reproduced a color very similar to the real one.
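
    The CVS measurement itself amounts to averaging CIELAB values over the meat region of a (colour-calibrated) image. A sketch with OpenCV follows; note that OpenCV's 8-bit Lab encoding has to be rescaled to the usual L*, a*, b* ranges, the meat mask is a crude placeholder, and camera calibration is omitted.

```python
import cv2
import numpy as np

img = cv2.imread("meat_sample.jpg")                         # hypothetical calibrated image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab).astype(np.float32)

# Undo OpenCV's 8-bit packing: L stored as L*255/100, a and b offset by 128.
L = lab[:, :, 0] * 100.0 / 255.0
a = lab[:, :, 1] - 128.0
b = lab[:, :, 2] - 128.0

mask = cv2.inRange(img, (0, 0, 60), (120, 120, 255)) > 0    # crude reddish-region mask (placeholder)
print("L*=%.1f  a*=%.1f  b*=%.1f" % (L[mask].mean(), a[mask].mean(), b[mask].mean()))
```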

  16. Mahotas: Open source software for scriptable computer vision

    Directory of Open Access Journals (Sweden)

    Luis Pedro Coelho

    2013-07-01

    Full Text Available Mahotas is a computer vision library for Python. It contains traditional image processing functionality such as filtering and morphological operations as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, a dynamic programming language, which is appropriate for fast development, but the algorithms are implemented in C++ and are tuned for speed. The library is designed to fit in with the scientific software ecosystem in this language and can leverage the existing infrastructure developed in that language. Mahotas is released under a liberal open source license (MIT License and is available from http://github.com/luispedro/mahotas and from the Python Package Index (http://pypi.python.org/pypi/mahotas. Tutorials and full API documentation are available online at http://mahotas.readthedocs.org/.
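
    A short usage sketch of the kind of scriptable pipeline Mahotas is built for (filtering, thresholding, labelling and texture features). The image path is an assumption, and the exact function set should be checked against the current Mahotas documentation.

```python
import mahotas as mh

im = mh.imread("cells.png", as_grey=True)            # hypothetical input image, loaded as grayscale
smooth = mh.gaussian_filter(im, 2.0)                 # traditional image processing: smoothing
binary = smooth > mh.thresholding.otsu(smooth.astype("uint8"))
labeled, n_objects = mh.label(binary)                # connected-component labelling
print(n_objects, "objects found")
texture = mh.features.haralick(im.astype("uint8")).mean(axis=0)   # texture descriptors
```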

  17. Volume Measurement in Solid Objects Using Artificial Vision Technique

    Science.gov (United States)

    Cordova-Fraga, T.; Martinez-Espinosa, J. C.; Bernal, J.; Huerta-Franco, R.; Sosa-Aquino, M.; Vargas-Luna, M.

    2004-09-01

    A simple system using an artificial vision technique for measuring the volume of solid objects is described. The system is based on the acquisition of an image sequence of the object while it rotates on an automated mechanism controlled by a PC. Volumes of different objects, such as a sphere, a cylinder and also a carrot, were measured. The proposed algorithm was developed in the LabVIEW 6.1 environment. This technique can be very useful when applied to measuring the human body for evaluating body composition.
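
    One simple way to turn such a rotation sequence of silhouettes into a volume estimate (a generic stand-in, not necessarily the LabVIEW algorithm used here) is to treat each image row as a disc whose diameter is the silhouette width and to average over the views:

```python
import numpy as np

def volume_from_silhouettes(masks, mm_per_px):
    """Estimate object volume from binary silhouettes captured while the object
    rotates about a vertical axis. Each silhouette row is treated as a disc of
    diameter equal to the silhouette width (solid-of-revolution approximation),
    and the per-view estimates are averaged."""
    volumes = []
    for mask in masks:                               # mask: 2-D boolean array, True on the object
        widths = mask.sum(axis=1) * mm_per_px        # silhouette width per row, in mm
        discs = np.pi * (widths / 2.0) ** 2 * mm_per_px   # disc area times slice height
        volumes.append(discs.sum())
    return float(np.mean(volumes))
```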

  18. Computer vision for real-time orbital operations. Center directors discretionary fund

    Science.gov (United States)

    Vinz, F. L.; Brewster, L. L.; Thomas, L. D.

    1984-01-01

    Machine vision research is examined as it relates to the NASA Space Station program and its associated Orbital Maneuvering Vehicle (OMV). Initial operations of the OMV for orbital assembly, docking, and servicing are manually controlled from the ground by means of an onboard TV camera. These orbital operations may be accomplished autonomously by machine vision techniques which use the TV camera as a sensing device. Classical machine vision techniques are described. An alternate method is developed and described which employs a syntactic pattern recognition scheme. It has the potential for substantial reduction of computing and data storage requirements in comparison to Two-Dimensional Fast Fourier Transform (2D FFT) image analysis. The method embodies powerful heuristic pattern recognition capability by identifying image shapes such as elongation, symmetry, number of appendages, and the relative length of appendages.

  19. Extending Driving Vision Based on Image Mosaic Technique

    Directory of Open Access Journals (Sweden)

    Chen Deng

    2017-01-01

    Full Text Available Car cameras have been used extensively to assist driving by making the surroundings visible. However, due to the limitation of the Angle of View (AoV), a dead zone still exists, which is a primary origin of car accidents. In this paper, we introduce a system to extend the vision of drivers to 360 degrees. Our system consists of four wide-angle cameras, which are mounted on different sides of a car. Although the AoV of each camera is within 180 degrees, by relying on the image mosaic technique our system can seamlessly integrate the 4-channel videos into a panorama video. The panorama video enables drivers to observe everything around the car as far as three meters away from a top view. We performed experiments in a laboratory environment. Preliminary results show that our system can eliminate the vision dead zone completely. Additionally, the real-time performance of our system can satisfy requirements for practical use.
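
    The mosaicking step can be illustrated with OpenCV's high-level stitcher; a real surround-view system would calibrate the four cameras and warp the views onto the ground plane, so the four frames and the stitcher below are only a rough stand-in for the described pipeline.

      import cv2

      frames = [cv2.imread("cam%d.png" % i) for i in range(4)]   # front/right/rear/left (placeholders)

      stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
      status, panorama = stitcher.stitch(frames)

      if status == cv2.Stitcher_OK:
          cv2.imwrite("surround_view.png", panorama)
      else:
          print("stitching failed, status code:", status)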

  20. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features, such as edges and profiles, of images as the data form for input. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low level functions into a high-level cognitive system, achieving invariances and other problems. Perspectives of applications of some human vision models and neural network models are analyzed.

  1. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    Directory of Open Access Journals (Sweden)

    Joaquin J. Casanova

    2014-09-01

    Full Text Available Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.
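
    A sketch of the segmentation idea: a two-component Gaussian mixture fitted by EM separates vegetation from soil pixels, and mean hue is then computed over the vegetation class. The image path, the component-assignment heuristic and the use of scikit-learn are assumptions, not the authors' implementation.

      import cv2
      import numpy as np
      from sklearn.mixture import GaussianMixture

      img = cv2.imread("wheat_plot.png")               # field image (placeholder)
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

      pixels = img.reshape(-1, 3).astype(np.float64)
      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(pixels).reshape(img.shape[:2])

      # Assume the component with the higher G - R mean is vegetation
      means = gmm.means_                               # BGR order
      veg_class = int(np.argmax(means[:, 1] - means[:, 2]))
      veg_mask = labels == veg_class

      cover = veg_mask.mean()                          # fraction of vegetation pixels
      hue_deg = hsv[..., 0][veg_mask].astype(float) * 2.0   # OpenCV hue 0-179 -> degrees
      print("cover = %.2f, mean vegetation hue = %.1f degrees" % (cover, hue_deg.mean()))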

  2. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  3. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities

    CERN Document Server

    Buyya, Rajkumar; Venugopal, Srikumar

    2008-01-01

    This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents some representative Cloud platforms especially those developed in industries along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the 3rd generation Aneka enterprise Grid technology; reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment along with pointers to future community research; and concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision.

  4. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian Pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl destined for export has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.

  5. Former Food Products Safety Evaluation: Computer Vision as an Innovative Approach for the Packaging Remnants Detection

    Directory of Open Access Journals (Sweden)

    Marco Tretola

    2017-01-01

    Full Text Available Former food products (FFPs) represent a way by which leftovers from the food industry (e.g., biscuits, bread, breakfast cereals, chocolate bars, pasta, savoury snacks, and sweets) are converted into ingredients for the feed industry, thereby keeping these losses within the food chain. FFPs represent an alternative source of nutrients for animal feeding. However, beyond their nutritional value, the use of FFPs in animal feeding also raises safety issues, such as those related to the presence of packaging remnants. These contaminants might remain in FFPs after food processing (e.g., collection, unpacking, mixing, grinding, and drying). Nowadays, artificial senses are widely used for the detection of foreign material in food, and all of them involve computer vision. The computer vision technique provides detailed pixel-based characterization of the colour spectrum of food products, suitable for quality evaluation. The application of computer vision for a rapid qualitative screening of FFPs' safety features, in particular for the detection of packaging remnants, has been recently tested. This paper presents the basic principles, the advantages, and the disadvantages of the computer vision method, with an evaluation of its potential in the detection of packaging remnants in FFPs.

  6. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D

    2016-01-01

    Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy, computational efficiency, and reliability of computational techniques for high-frequency electromagnetics applications such as antennas, microwave devices and radar scattering.

  7. Computer vision syndrome and ergonomic practices among undergraduate university students.

    Science.gov (United States)

    Mowatt, Lizette; Gordon, Carron; Santosh, Arvind Babu Rajendra; Jones, Thaon

    2017-10-05

    To determine the prevalence of computer vision syndrome (CVS) and ergonomic practices among students in the Faculty of Medical Sciences at The University of the West Indies (UWI), Jamaica. A cross-sectional study was done with a self-administered questionnaire. Four hundred and nine students participated; 78% were females. The mean age was 21.6 years. Neck pain (75.1%), eye strain (67%), shoulder pain (65.5%) and eye burn (61.9%) were the most common CVS symptoms. Dry eyes (26.2%), double vision (28.9%) and blurred vision (51.6%) were the least commonly experienced symptoms. Eye burning (P = .001), eye strain (P = .041) and neck pain (P = .023) were significantly related to level of viewing. Moderate eye burning (55.1%) and double vision (56%) occurred in those who used handheld devices (P = .001 and .007, respectively). Moderate blurred vision was reported in 52% who looked down at the device compared with 14.8% who held it at an angle. Severe eye strain occurred in 63% of those who looked down at a device compared with 21% who kept the device at eye level. Shoulder pain was not related to pattern of use. Ocular symptoms and neck pain were less likely if the device was held just below eye level. There is a high prevalence of CVS symptoms amongst university students; these symptoms, in particular neck pain, eye strain and eye burning, could be reduced with improved ergonomic practices. © 2017 John Wiley & Sons Ltd.

  8. A study of computer-related upper limb discomfort and computer vision syndrome.

    Science.gov (United States)

    Sen, A; Richardson, Stanley

    2007-12-01

    Personal computers are one of the commonest office tools in Malaysia today. Their usage, even for three hours per day, leads to a health risk of developing Occupational Overuse Syndrome (OOS), Computer Vision Syndrome (CVS), low back pain, tension headaches and psychosocial stress. The study was conducted to investigate how a multiethnic society in Malaysia is coping with these problems, which are increasing at a phenomenal rate in the west. This study investigated computer usage, awareness of ergonomic modifications of computer furniture and peripherals, symptoms of CVS and risk of developing OOS. A cross-sectional questionnaire study of 136 computer users was conducted on a sample population of university students and office staff. A 'Modified Rapid Upper Limb Assessment (RULA) for office work' technique was used for evaluation of OOS. The prevalence of CVS was surveyed incorporating a 10-point scoring system for each of its various symptoms. It was found that many were using a standard keyboard and mouse without any ergonomic modifications. Around 50% of those with some low back pain did not have an adjustable backrest. Many users had higher RULA scores of the wrist and neck, suggesting an increased risk of developing OOS, which needed further intervention. Many (64%) were using refractive corrections and still had high CVS scores, commonly including eye fatigue, headache and a burning sensation. An increase in CVS scores (suggesting more subjective symptoms) correlated with an increase in computer usage spells. It was concluded that further onsite studies are needed to follow up this survey and to decrease the risks of developing CVS and OOS amongst young computer users.

  9. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    Science.gov (United States)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  10. PENGEMBANGAN COMPUTER VISION SYSTEM SEDERHANA UNTUK MENENTUKAN KUALITAS TOMAT Development of a simple Computer Vision System to determine tomato quality

    Directory of Open Access Journals (Sweden)

    Rudiati Evi Masithoh

    2012-05-01

    Full Text Available The purpose of this research was to develop a simple computer vision system (CVS) to non-destructively measure tomato quality based on its Red Green Blue (RGB) color parameters. The tomato quality parameters measured were Brix, citric acid, vitamin C, and total sugar. The system consisted of a box in which to place the object, a webcam to capture images, a computer to process images, an illumination system, and image analysis software equipped with an artificial neural network technique for determining tomato quality. The network architecture was formed with 3 layers, consisting of 1 input layer with 3 input neurons, 1 hidden layer with 14 neurons using the logsig activation function, and 1 output layer with 5 neurons using the purelin activation function, trained with the backpropagation algorithm. The CVS developed was able to predict the quality parameters Brix, vitamin C, citric acid, and total sugar. To obtain predicted values equal or close to the actual values, a calibration model was required. For the Brix value, the actual value was obtained from the equation y = 12.16x - 26.46, with x the predicted Brix. The actual values of vitamin C, citric acid, and total sugar were obtained from y = 1.09x - 3.13, y = 7.35x - 19.44, and y = 1.58x - 0.18, with x the predicted value of vitamin C, citric acid, and total sugar, respectively.
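
    A rough analogue of the network described above (3 RGB inputs, a hidden layer of 14 logistic units, linear outputs) can be set up with scikit-learn instead of the original backpropagation code; the training arrays and file names are placeholders.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # X: mean R, G, B per tomato image; y: Brix, citric acid, vitamin C, total sugar
      X = np.load("tomato_rgb.npy")        # shape (n_samples, 3), placeholder file
      y = np.load("tomato_quality.npy")    # shape (n_samples, 4), placeholder file

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(14,), activation="logistic",
                       solver="adam", max_iter=5000, random_state=0))
      model.fit(X, y)

      print("predicted quality parameters:", model.predict(X[:1]))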

  11. A Low Cost Vision Based Hybrid Fiducial Mark Tracking Technique for Mobile Industrial Robots

    Directory of Open Access Journals (Sweden)

    Mohammed Y Aalsalem

    2012-07-01

    Full Text Available The field of robotic vision is developing rapidly. Robots can react intelligently and provide assistance to user activities through sentient computing. Since industrial applications pose complex requirements that cannot be handled by humans, an efficient, low cost and robust technique is required for the tracking of mobile industrial robots. The existing sensor based techniques for mobile robot tracking are expensive and complex to deploy, configure and maintain. Some of them also demand dedicated and often expensive hardware. This paper presents a low cost vision based technique called “Hybrid Fiducial Mark Tracking” (HFMT) for tracking mobile industrial robots. The HFMT technique requires off-the-shelf hardware (CCD cameras) and printable 2-D circular marks used as fiducials for tracking a mobile industrial robot on a pre-defined path. The proposed technique allows the robot to follow a predefined path by using fiducials for the detection of right and left turns on the path and a white strip for tracking the path. The HFMT technique is implemented and tested on an indoor mobile robot at our laboratory. Experimental results from the robot navigating in real environments have confirmed that our approach is simple and robust and can be adopted in any hostile industrial environment where humans are unable to work.

  12. MER-DIMES : a planetary landing application of computer vision

    Science.gov (United States)

    Cheng, Yang; Johnson, Andrew; Matthies, Larry

    2005-01-01

    During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify image data to level ground plane. Feature selection and tracking is employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.
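
    The feature-selection-and-tracking step can be illustrated with corner detection plus pyramidal Lucas-Kanade optical flow between two rectified images; this is only a generic stand-in for the DIMES tracker, and the file names are placeholders.

      import cv2
      import numpy as np

      img0 = cv2.imread("descent_0_rectified.png", cv2.IMREAD_GRAYSCALE)
      img1 = cv2.imread("descent_1_rectified.png", cv2.IMREAD_GRAYSCALE)

      # Select corners in the first rectified image and track them into the second
      pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=50, qualityLevel=0.01, minDistance=10)
      pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)

      ok = status.ravel() == 1
      good0 = pts0[ok].reshape(-1, 2)
      good1 = pts1[ok].reshape(-1, 2)

      shift = np.median(good1 - good0, axis=0)         # robust image motion in pixels
      print("median image motion (dx, dy):", shift)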

  14. EFFICACY OF TRIPHALA GHRITA NETRATARPAN IN COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    Deepak P. Sawant

    2013-04-01

    Full Text Available In the present era, computerization is necessary for a country's progress. Work at a computer, however, is very intensive and tiring. Computer Vision Syndrome (CVS) is the complex of eye and vision problems related to near work that are experienced during or related to computer use. Traditional medicine has been practiced for many centuries in many parts of the world. The present study was undertaken to evaluate the effect of the Triphala Ghrita Tarpan herbal compound preparation, prepared as per the classics, in a trial group of 30 patients suffering from CVS, applied for 7 days in three consecutive months. The duration of Tarpana was 15-20 minutes. The control group also included 30 patients, who were advised certain eye exercises. The results in the trial group were satisfactory, and Tarpana was found to be effective in treating all the signs and symptoms of CVS, which was supported by the statistical analysis (P<0.001).

  15. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo

  16. Displacement measurement system for inverters using computer micro-vision

    Science.gov (United States)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm combined with optical microscopy. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system can achieve the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
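
    A minimal sketch of displacement measurement by template matching: a template cut from a reference frame is located in a later frame by normalised cross-correlation, and the shift of the best match gives the displacement in pixels, scaled by an assumed optical calibration. Sub-micron accuracy as reported above would additionally require sub-pixel interpolation of the correlation peak; the coordinates and calibration factor below are placeholders.

      import cv2

      ref = cv2.imread("frame_ref.png", cv2.IMREAD_GRAYSCALE)
      cur = cv2.imread("frame_cur.png", cv2.IMREAD_GRAYSCALE)

      y0, x0, h, w = 200, 300, 64, 64                  # template location in the reference frame
      template = ref[y0:y0 + h, x0:x0 + w]

      # Normalised cross-correlation over the current frame
      response = cv2.matchTemplate(cur, template, cv2.TM_CCOEFF_NORMED)
      _, _, _, max_loc = cv2.minMaxLoc(response)       # (x, y) of the best match

      MICRON_PER_PIXEL = 0.1                           # assumed microscope calibration
      dx_um = (max_loc[0] - x0) * MICRON_PER_PIXEL
      dy_um = (max_loc[1] - y0) * MICRON_PER_PIXEL
      print("displacement: %.3f um, %.3f um" % (dx_um, dy_um))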

  17. Shape perception in human and computer vision an interdisciplinary perspective

    CERN Document Server

    Dickinson, Sven J

    2013-01-01

    This comprehensive and authoritative text/reference presents a unique, multidisciplinary perspective on Shape Perception in Human and Computer Vision. Rather than focusing purely on the state of the art, the book provides viewpoints from world-class researchers reflecting broadly on the issues that have shaped the field. Drawing upon many years of experience, each contributor discusses the trends followed and the progress made, in addition to identifying the major challenges that still lie ahead. Topics and features: examines each topic from a range of viewpoints, rather than promoting a speci

  18. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  19. Computer animation algorithms and techniques

    CERN Document Server

    Parent, Rick

    2012-01-01

    Driven by the demands of research and the entertainment industry, the techniques of animation are pushed to render increasingly complex objects with ever-greater life-like appearance and motion. This rapid progression of knowledge and technique impacts professional developers, as well as students. Developers must maintain their understanding of conceptual foundations, while their animation tools become ever more complex and specialized. The second edition of Rick Parent's Computer Animation is an excellent resource for the designers who must meet this challenge. The first edition establ

  20. Computer Vision Based Methods for Detection and Measurement of Psychophysiological Indicators

    DEFF Research Database (Denmark)

    Irani, Ramin

    2017-01-01

    Recently, computer vision technologies have been used for the analysis of human facial video in order to provide a remote indicator of crucial psychophysiological parameters such as fatigue, pain, stress and heartbeat rate. Available contact-based technologies are inconvenient for monitoring patients' physiological signals because they irritate the skin and require a large number of wires to collect and transmit the signals. Contact-free computer vision techniques are not only an easy and economical way to overcome this issue, they also provide automatic recognition of the patients' emotions... ...expressions show that present facial expression recognition systems are not reliable for recognizing patients' emotional states, especially when patients have difficulty controlling their facial muscles. Regarding future research, the authors believe that the approaches proposed in this thesis may...

  1. Neural networks and neuroscience-inspired computer vision.

    Science.gov (United States)

    Cox, David Daniel; Dean, Thomas

    2014-09-22

    Brains are, at a fundamental level, biological computing machines. They transform a torrent of complex and ambiguous sensory information into coherent thought and action, allowing an organism to perceive and model its environment, synthesize and make decisions from disparate streams of information, and adapt to a changing environment. Against this backdrop, it is perhaps not surprising that computer science, the science of building artificial computational systems, has long looked to biology for inspiration. However, while the opportunities for cross-pollination between neuroscience and computer science are great, the road to achieving brain-like algorithms has been long and rocky. Here, we review the historical connections between neuroscience and computer science, and we look forward to a new era of potential collaboration, enabled by recent rapid advances in both biologically-inspired computer vision and in experimental neuroscience methods. In particular, we explore where neuroscience-inspired algorithms have succeeded, where they still fail, and we identify areas where deeper connections are likely to be fruitful.

  2. Computer vision challenges and technologies for agile manufacturing

    Science.gov (United States)

    Molley, Perry A.

    1996-02-01

    applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment for automation of processes such as inspection, assembly, welding, material dispensing and other process control tasks. Although there are many academic and commercial solutions that have been developed, none have had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword. The benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with features used for the recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.

  3. Computer vision and action recognition a guide for image processing and computer vision community for action understanding

    CERN Document Server

    Ahad, Md Atiqur Rahman

    2011-01-01

    Human action analyses and recognition are challenging problems due to large variations in human motion and appearance, camera viewpoint and environment settings. The field of action and activity representation and recognition is relatively old, yet not well-understood by the students and research community. Some important but common motion recognition problems are even now unsolved properly by the computer vision community. However, in the last decade, a number of good approaches are proposed and evaluated subsequently by many researchers. Among those methods, some methods get significant atte

  4. Computer vision uncovers predictors of physical urban change.

    Science.gov (United States)

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  5. The Pixhawk Open-Source Computer Vision Framework for Mavs

    Science.gov (United States)

    Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.

    2011-09-01

    Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State of the art autonomous systems are however geared towards the application area in safe and obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  6. Computer Vision-Based Image Analysis of Bacteria.

    Science.gov (United States)

    Danielsen, Jonas; Nordenfelt, Pontus

    2017-01-01

    Microscopy is an essential tool for studying bacteria, but is today mostly used in a qualitative or possibly semi-quantitative manner, often involving time-consuming manual analysis. It also makes it difficult to assess the importance of individual bacterial phenotypes, especially when there are only subtle differences in features such as shape, size, or signal intensity, which is typically very difficult for the human eye to discern. With computer vision-based image analysis - where computer algorithms interpret image data - it is possible to achieve an objective and reproducible quantification of images in an automated fashion. Besides being a much more efficient and consistent way to analyze images, this can also reveal important information that was previously hard to extract with traditional methods. Here, we present basic concepts of automated image processing, segmentation and analysis that can be relatively easily implemented for use in bacterial research.
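
    A basic example of such an automated workflow, using scikit-image as one possible toolchain: threshold the image, label connected components and quantify per-cell shape and intensity. The image path and parameter values are placeholders.

      from skimage import io, filters, measure, morphology

      img = io.imread("bacteria.tif")                  # single-channel image (placeholder)
      mask = img > filters.threshold_otsu(img)         # global Otsu threshold
      mask = morphology.remove_small_objects(mask, min_size=20)

      labels = measure.label(mask)                     # connected components
      props = measure.regionprops(labels, intensity_image=img)

      for p in props:
          print("cell %d: area=%d px, eccentricity=%.2f, mean intensity=%.1f"
                % (p.label, p.area, p.eccentricity, p.mean_intensity))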

  7. THE PIXHAWK OPEN-SOURCE COMPUTER VISION FRAMEWORK FOR MAVS

    Directory of Open Access Journals (Sweden)

    L. Meier

    2012-09-01

    Full Text Available Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State of the art autonomous systems are however geared towards the application area in safe and obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  8. Heterogeneous compute in computer vision: OpenCL in OpenCV

    Science.gov (United States)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
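
    A small illustration of the Transparent API mentioned above: wrapping an image in cv2.UMat lets the same OpenCV calls run through OpenCL on a supported device and fall back to the CPU otherwise; the image path is a placeholder.

      import cv2

      print("OpenCL available:", cv2.ocl.haveOpenCL())
      cv2.ocl.setUseOpenCL(True)

      img = cv2.imread("scene.png")                    # placeholder image
      u_img = cv2.UMat(img)                            # may be backed by device memory

      u_gray = cv2.cvtColor(u_img, cv2.COLOR_BGR2GRAY)
      u_edges = cv2.Canny(u_gray, 50, 150)             # same API, possibly executed via OpenCL

      edges = u_edges.get()                            # download the result as a numpy array
      cv2.imwrite("edges.png", edges)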

  9. Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions

    CERN Document Server

    Fedkiw, R P

    2003-01-01

    In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods, and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes, and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, fluids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision.
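
    The total variation denoising model mentioned above can be tried in a few lines with scikit-image's Chambolle solver; the test image, noise level and regularisation weight are illustrative.

      import numpy as np
      from skimage import data, img_as_float
      from skimage.metrics import peak_signal_noise_ratio
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(0)
      clean = img_as_float(data.camera())
      noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)

      # Larger weight -> stronger total-variation regularisation (more smoothing)
      denoised = denoise_tv_chambolle(noisy, weight=0.1)

      print("PSNR noisy   : %.2f dB" % peak_signal_noise_ratio(clean, noisy))
      print("PSNR denoised: %.2f dB" % peak_signal_noise_ratio(clean, denoised))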

  10. Automated cutting in the food industry using computer vision

    KAUST Repository

    Daley, Wayne D R

    2012-01-01

    The processing of natural products has posed a significant problem to researchers and developers involved in the development of automation. The challenges have come from areas such as sensing, grasping and manipulation, as well as product-specific areas such as cutting and handling of meat products. Meat products are naturally variable, and fixed automation is at the limit of its ability to accommodate these products. Intelligent automation systems (such as robots) are also challenged, mostly because of a lack of knowledge of the physical characteristics of the individual products. Machine vision has helped to address some of these shortcomings but underperforms in many situations. Developments in sensors, software and processing power are now offering capabilities that will help to make more of these problems tractable. In this chapter we will describe some of the developments that are underway in terms of computer vision for meat product applications, the problems they are addressing and potential future trends. © 2012 Woodhead Publishing Limited All rights reserved.

  11. Mechanical characterization of artificial muscles with computer vision

    Science.gov (United States)

    Verdu, R.; Morales-Sanchez, Juan; Fernandez-Romero, Antonio J.; Cortes, M. T.; Otero, Toribio F.; Weruaga-Prieto, Luis

    2002-07-01

    Conducting polymers are new materials that were developed in the late 1970s as intrinsically electronic conductors at the molecular level. The presence of polymer, solvent, and ionic components reminds one of the composition of the materials chosen by nature to produce muscles, neurons, and skin in living creatures. The ability to transform electrical energy into mechanical energy through an electrochemical reaction, promoting film swelling and shrinking during oxidation or reduction, respectively, produces a macroscopic change in volume. On specially designed bi-layer polymeric stripes this conformational change gives rise to stripe curl and bending, where the position or angle of the free end of the polymeric stripe is directly related to the degree of oxidation, or charge consumed. Study of these curvature variations has so far been performed only on a manual basis. In this paper we propose a preliminary study of the polymeric muscle electromechanical properties by using a computer vision system. The vision system required is simple: it is composed of cameras for tracking the muscle from different angles and special algorithms, based on active contours, to analyse the deformable motion. Graphical results support the validity of this approach, which opens the way for performing automatic testing on artificial muscles for commercial purposes.

  12. A dangerous cocktail: databases, information techniques and lack of visions

    DEFF Research Database (Denmark)

    Tarp, Sven

    2017-01-01

    This contribution discusses challenges to lexicography created by the new computer, information and communication technologies and techniques. It argues that the current transition period is full of paradoxes and that the main problem seems to be the subjective factor, i.e. the ability to adapt fully to the new technologies and get rid of old habits and ways of thinking. The article provides some examples of how the current challenges can be approached in terms of databases, user interfaces and other tools and techniques to assist the compilation and presentation of online dictionaries...

  13. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    Science.gov (United States)

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence to support that some human skeletal remains belong or not to a missing person. It involves the process of overlaying a skull with a number of ante mortem images of an individual and the analysis of their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage just focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error prone, and time consuming part of the whole process. Though the numerical assessment of the method quality has not been achieved yet, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can be thus considered as a tool to aid forensic anthropologists to develop the skull-face overlay, automating and avoiding subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  15. Prediction of pork color attributes using computer vision system.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng Hung; Bachmeier, Laura; Somers, Rose Marie; Chen, Kun Jie; Newman, David

    2016-03-01

    Color image processing and regression methods were utilized to evaluate the color score of pork center cut loin samples. One hundred loin samples of subjective color scores 1 to 5 (NPB, 2011; n=20 for each color score) were selected to determine correlation values between Minolta colorimeter measurements and image processing features. Eighteen image color features were extracted from three different color spaces: RGB (red, green, blue), HSI (hue, saturation, intensity) and L*a*b*. When comparing Minolta colorimeter values with those obtained from image processing, correlations were significant for the color attributes. The proposed linear regression model had a coefficient of determination (R(2)) of 0.83, compared to the stepwise regression result (R(2)=0.70). These results indicate that computer vision methods have the potential to be used as a tool in predicting pork color attributes.
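
    A sketch of the feature-extraction-plus-regression idea: mean colour values from three colour spaces are regressed against the subjective colour scores. The file layout is an assumption, and only mean values (rather than the paper's full set of eighteen features) are used here.

      import cv2
      import numpy as np
      from sklearn.linear_model import LinearRegression

      def color_features(path):
          """Mean RGB, HSV and L*a*b* values of a loin image (9 features)."""
          bgr = cv2.imread(path)
          feats = []
          for code in (cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2LAB):
              conv = cv2.cvtColor(bgr, code)
              feats.extend(conv.reshape(-1, 3).mean(axis=0))
          return np.array(feats)

      paths = ["loin_%03d.png" % i for i in range(100)]    # placeholder file names
      scores = np.loadtxt("color_scores.txt")              # subjective scores 1-5 (placeholder)

      X = np.vstack([color_features(p) for p in paths])
      model = LinearRegression().fit(X, scores)
      print("R^2 on training data: %.2f" % model.score(X, scores))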

  16. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.

  17. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    In this thesis 3D reconstruction was investigated for application in precision agriculture, where previous work focused on low resolution index maps in which each pixel represents an area in the field and the index represents an overall crop status in that area. 3D reconstructions of plants would allow for more detailed descriptions of the state of the crops, analogous to the way humans evaluate crop health, i.e. by looking at the canopy structure and checking for discolorations at specific locations on the plants. Previous research in 3D reconstruction methods based on cameras has focused on rigid... ...in active shape modeling of weeds for weed detection. Occlusion and overlapping leaves were main problems for this kind of work. Using 3D computer vision it was possible to separate overlapping crop leaves from weed leaves using the 3D information from the disparity maps. The results of the 3D...

  18. Computer vision analysis of image motion by variational methods

    CERN Document Server

    Mitiche, Amar

    2014-01-01

    This book presents a unified view of image motion analysis under the variational framework. Variational methods, rooted in physics and mechanics, but appearing in many other domains, such as statistics, control, and computer vision, address a problem from an optimization standpoint, i.e., they formulate it as the optimization of an objective function or functional. The methods of image motion analysis described in this book use the calculus of variations to minimize (or maximize) an objective functional which transcribes all of the constraints that characterize the desired motion variables. The book addresses the four core subjects of motion analysis: Motion estimation, detection, tracking, and three-dimensional interpretation. Each topic is covered in a dedicated chapter. The presentation is prefaced by an introductory chapter which discusses the purpose of motion analysis. Further, a chapter is included which gives the basic tools and formulae related to curvature, Euler Lagrange equations, unconstrained de...

  19. Codesign Environment for Computer Vision Hw/Sw Systems

    Science.gov (United States)

    Toledo, Ana; Cuenca, Sergio; Suardíaz, Juan

    2006-10-01

    In this paper we present a novel codesign environment which is conceived especially for hybrid computer vision systems. The environment is based on MathWorks Simulink and Xilinx System Generator tools and comprises the following: an incremental codesign flow, diverse libraries of virtual components with three levels of description (high level, hardware and software), semi-automatic tools to help partition the system, and a methodology for building new library components. The use of high-level libraries allows systems to be developed without exhaustive knowledge of the underlying architecture or special skills in hardware description languages. This enables a smooth incorporation of reconfigurable technologies into image processing systems, which are generally developed by engineers who are not closely familiar with hardware design disciplines.

  20. COMPUTER VISION IN THE TEMPLES OF KARNAK: PAST, PRESENT & FUTURE

    Directory of Open Access Journals (Sweden)

    V. Tournadre

    2017-05-01

    Full Text Available CFEETK, the French-Egyptian Center for the Study of the Temples of Karnak, is celebrating the 50th anniversary of its foundation this year. As a multicultural and transdisciplinary research center, it has always been a playground for testing emerging technologies applied to various fields. The rise of automatic computer vision algorithms is a development of particular interest, as it allows non-experts to produce high-value results. This article presents the evolution of measurement experiments over the past 50 years and describes how cameras are used today. Ultimately, it aims to set the direction of upcoming projects and discusses how image processing could contribute further to the study and conservation of cultural heritage.

  1. Computer Vision in the Temples of Karnak: Past, Present & Future

    Science.gov (United States)

    Tournadre, V.; Labarta, C.; Megard, P.; Garric, A.; Saubestre, E.; Durand, B.

    2017-05-01

    CFEETK, the French-Egyptian Center for the Study of the Temples of Karnak, is celebrating the 50th anniversary of its foundation this year. As a multicultural and transdisciplinary research center, it has always been a playground for testing emerging technologies applied to various fields. The rise of automatic computer vision algorithms is a development of particular interest, as it allows non-experts to produce high-value results. This article presents the evolution of measurement experiments over the past 50 years and describes how cameras are used today. Ultimately, it aims to set the direction of upcoming projects and discusses how image processing could contribute further to the study and conservation of cultural heritage.

  2. State-Estimation Algorithm Based on Computer Vision

    Science.gov (United States)

    Bayard, David; Brugarolas, Paul

    2007-01-01

    An algorithm and software to implement the algorithm are being developed as means to estimate the state (that is, the position and velocity) of an autonomous vehicle, relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.

  3. A shape representation for computer vision based on differential topology.

    Science.gov (United States)

    Blicher, A P

    1995-01-01

    We describe a shape representation for use in computer vision, after a brief review of shape representation and object recognition in general. Our shape representation is based on graph structures derived from level sets whose characteristics are understood from differential topology, particularly singularity theory. This leads to a representation which is stable and whose changes under deformation are simple. The latter allows smoothing in the representation domain ('symbolic smoothing'), which in turn can be used for coarse-to-fine strategies, or as a discrete analog of scale space. Essentially the same representation applies to an object embedded in 3-dimensional space as to one in the plane, and likewise for a 3D object and its silhouette. We suggest how this can be used for recognition.

  4. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.

  5. Computer Vision Aided Measurement of Morphological Features in Medical Optics

    Directory of Open Access Journals (Sweden)

    Bogdana Bologa

    2010-09-01

    Full Text Available This paper presents a computer vision aided method for non-invasive interpupillary distance (IPD) measurement. IPD is a morphological feature required in any ophthalmological frame prescription. A good frame prescription is nowadays highly dependent on accurate IPD estimation, so that the lenses remain free of eye strain. The idea is to replace the ruler or the pupilometer with a more accurate method while keeping the patient's eyes free from any movement or gaze restrictions. The method proposed in this paper uses a video camera and a punctual light source in order to determine the IPD with sub-millimeter error. The results are compared against standard eye and object detection routines from the literature.

  6. Identification of cichlid fishes from Lake Malawi using computer vision.

    Directory of Open Access Journals (Sweden)

    Deokjin Joo

    Full Text Available BACKGROUND: The explosively radiating evolution of the cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in the shape of jaw and body. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automated species identification of cichlids. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated with the statistical classifiers Support Vector Machine and Random Forest. Both classifiers performed better when body shape information was added to the color and stripe features; body shape variables boosted the classification accuracy by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, compared with a mere 42% success rate by human observers. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. CONCLUSIONS: Computer vision showed a notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids, although the information was not enough for errorless species identification. Our results indicate that there appears to be an unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species.
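
    As a rough illustration of the evaluation stage described above (and not the authors' actual pipeline), the sketch below compares a Support Vector Machine and a Random Forest on per-fish feature vectors with scikit-learn; the feature matrix and labels are random placeholders standing in for the extracted color, stripe and body-shape descriptors.

    ```python
    # Compare SVM and Random Forest on (placeholder) per-fish feature vectors.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(594, 30))        # placeholder for extracted image features
    y = rng.integers(0, 12, size=594)     # placeholder for the 12 classes (species x sex)

    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    forest = RandomForestClassifier(n_estimators=500, random_state=0)

    print("SVM accuracy:   ", cross_val_score(svm, X, y, cv=5).mean())
    print("Forest accuracy:", cross_val_score(forest, X, y, cv=5).mean())
    ```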

  7. Screening for diabetic retinopathy using computer vision and physiological markers.

    Science.gov (United States)

    Hann, Christopher E; Revie, James A; Hewett, Darren; Chase, J Geoffrey; Shaw, Geoffrey M

    2009-07-01

    Hyperglycemia and diabetes result in vascular complications, most notably diabetic retinopathy (DR). The prevalence of DR is growing, and it is a leading cause of blindness and/or visual impairment in developed countries. Current methods of detecting, screening, and monitoring DR are based on subjective human evaluation, which is also slow and time-consuming. As a result, detection of onset and progress monitoring of DR are clinically difficult. Computer vision methods are developed to isolate and detect two of the most common DR dysfunctions: dot hemorrhages (DH) and exudates. The algorithms use specific color channels and segmentation methods to separate these DR manifestations from physiological features in digital fundus images. The algorithms are tested on the first 100 images from a published database. The diagnostic outcome and the resulting positive and negative prediction values (PPV and NPV) are reported. The first 50 images are marked with specialist-determined ground truth for each individual exudate and/or DH, which is also compared to the algorithm identification. Exudate identification had 96.7% sensitivity and 94.9% specificity for diagnosis (PPV = 97%, NPV = 95%). Dot hemorrhage identification had 98.7% sensitivity and 100% specificity (PPV = 100%, NPV = 96%). Greater than 95% of ground-truth-identified exudates and DHs were found by the algorithm in the marked first 50 images, with less than 0.5% false positives. A direct computer vision approach enabled high-quality identification of exudates and DHs in an independent data set of fundus images. The methods are readily generalizable to other clinical manifestations of DR. The results justify a blinded clinical trial of the system to prove its capability to detect, diagnose, and, over the long term, monitor the state of DR in individuals with diabetes. Copyright 2009 Diabetes Technology Society.
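
    The published algorithm is not reproduced in the record, so the following is only an illustrative sketch of the general idea of color-channel-based exudate candidate detection: work on the contrast-enhanced green channel, subtract a blurred background, and threshold the bright residue. The thresholds and morphology are assumptions, and a real system would still need to mask physiological features such as the optic disc.

    ```python
    # Bright-lesion (exudate) candidate mask from the green channel of a fundus image.
    import cv2

    def exudate_candidates(fundus_bgr):
        green = fundus_bgr[:, :, 1]                               # exudates contrast well here
        green = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)
        background = cv2.medianBlur(green, 51)                    # heavily blurred background
        highpass = cv2.subtract(green, background)                # bright structures remain
        _, mask = cv2.threshold(highpass, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle

    # usage: mask = exudate_candidates(cv2.imread("fundus.png"))
    ```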

  8. Computer and visual display terminals (VDT) vision syndrome (CVDTS).

    Science.gov (United States)

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-07-01

    Computer and visual display terminals have become an essential part of the modern lifestyle. The use of these devices has made our life simpler, in household work as well as in offices. However, the prolonged use of these devices is not without complications. Computer and visual display terminals syndrome is a constellation of ocular and extraocular symptoms associated with prolonged use of visual display terminals. This syndrome is gaining importance in this modern era because of the widespread use of technology in day-to-day life. It is associated with asthenopic symptoms, visual blurring, dry eyes, musculoskeletal symptoms such as neck pain, back pain, shoulder pain, carpal tunnel syndrome, psychosocial factors, venous thromboembolism, shoulder tendonitis, and elbow epicondylitis. Proper identification of symptoms and causative factors is necessary for accurate diagnosis and management. This article focuses on the various aspects of the computer and visual display terminals syndrome described in the previous literature. Further research is needed for a better understanding of the complex pathophysiology and management.

  9. Computer vision for robots; Proceedings of the Meeting, Cannes, France, December 2-6, 1985

    Science.gov (United States)

    Faugeras, O. D. (Editor); Kelley, R. B. (Editor)

    1986-01-01

    The conference presents papers on segmentation techniques, three-dimensional recognition and representation, processing image sequences, and navigation and mobility. Particular attention is given to determining the pose of an object, adaptive least squares correlation with geometrical constraints, and the reliable formation of feature vectors for two-dimensional shape representation. Other topics include the real-time tracking of a target moving on a natural textured background, computer vision for the guidance of roving robots, and integrating sensory data for object recognition tasks.

  11. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    There is an increasing interest in using 3D computer vision in precision agriculture. This calls for better quantitative evaluation and understanding of computer vision methods. This paper proposes a test framework using ray traced crop scenes that allows in-depth analysis of algorithm performance...

  12. Computer vision for foreign body detection and removal in the food industry

    Science.gov (United States)

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  13. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    Science.gov (United States)

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…

  14. Application of Machine Vision Technique in Weed Identification

    Institute of Scientific and Technical Information of China (English)

    LIU Zhen-heng; ZHANG Chang-li; FANG Jun-long

    2004-01-01

    This paper mainly introduces foreign research methods and results on weed identification using machine vision. Research in this area is lacking in our country, so this paper can serve as a reference for domestic studies on weed identification.

  15. The face of an imposter: computer vision for deception detection research in progress

    NARCIS (Netherlands)

    Elkins, Aaron C.; Sun, Yijia; Zafeiriou, Stefanos; Pantic, Maja

    2013-01-01

    Using video analyzed from a novel deception experiment, this paper introduces computer vision research in progress that addresses two critical components to computational modeling of deceptive behavior: 1) individual nonverbal behavior differences, and 2) deceptive ground truth. Video interviews ana

  16. Blink rate, incomplete blinks and computer vision syndrome.

    Science.gov (United States)

    Portello, Joan K; Rosenfield, Mark; Chu, Christina A

    2013-05-01

    Computer vision syndrome (CVS), a highly prevalent condition, is frequently associated with dry eye disorders. Furthermore, a reduced blink rate has been observed during computer use. The present study examined whether post-task ocular and visual symptoms are associated with either a decreased blink rate or a higher prevalence of incomplete blinks. An additional trial tested whether increasing the blink rate would reduce CVS symptoms. Subjects (N = 21) were required to perform a continuous 15-minute reading task on a desktop computer at a viewing distance of 50 cm. Subjects were videotaped during the task to determine their blink rate and amplitude. Immediately after the task, subjects completed a questionnaire regarding ocular symptoms experienced during the trial. In a second session, the blink rate was increased by means of an audible tone that sounded every 4 seconds, with subjects being instructed to blink on hearing the tone. The mean blink rate during the task without the audible tone was 11.6 blinks per minute (SD, 7.84). The percentage of blinks deemed incomplete for each subject ranged from 0.9 to 56.5%, with a mean of 16.1% (SD, 15.7). A significant positive correlation was observed between the total symptom score and the percentage of incomplete blinks during the task (p = 0.002). Furthermore, a significant negative correlation was noted between the blink score and symptoms (p = 0.035). Increasing the mean blink rate to 23.5 blinks per minute by means of the audible tone did not produce a significant change in the symptom score. Whereas CVS symptoms are associated with a reduced blink rate, the completeness of the blink may be equally significant. Because instructing a patient to increase his or her blink rate may be ineffective or impractical, actions to achieve complete corneal coverage during blinking may be more helpful in alleviating symptoms during computer operation.

  17. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different areas of computer vision, such as person identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and for computer iridology in particular. The article also considers the choice of colour spaces, which are used for filtering and as a pre-processing step for the images. Method of AdaB...
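
    As a generic illustration of the LBP features mentioned above (an assumption, not the article's implementation), the sketch below computes a uniform local-binary-pattern histogram with scikit-image; such a histogram could then feed AdaBoost or another classifier.

    ```python
    # Uniform LBP histogram of a grayscale image, usable as a texture feature vector.
    import numpy as np
    from skimage import color, io
    from skimage.feature import local_binary_pattern

    def lbp_histogram(path, points=8, radius=1):
        gray = color.rgb2gray(io.imread(path))
        lbp = local_binary_pattern(gray, points, radius, method="uniform")
        n_bins = points + 2                   # uniform patterns plus one "non-uniform" bin
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        return hist
    ```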

  18. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    Science.gov (United States)

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  19. Atoms of recognition in human and computer vision.

    Science.gov (United States)

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-01

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  20. A computer vision based candidate for functional balance test.

    Science.gov (United States)

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

    Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, diseases, injuries and environmental factors. The Centers for Disease Control and Prevention (CDC) estimate of the cost of falls among older adults was $34 billion in 2013, and it is expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for a functional balance test. The test takes less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long-term care and assisted living facilities. Our long-term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.

  1. Selection of Norway spruce somatic embryos by computer vision

    Science.gov (United States)

    Hamalainen, Jari J.; Jokinen, Kari J.

    1993-05-01

    A computer vision system was developed for the classification of plant somatic embryos. The embryos are in a Petri dish that is transferred with constant speed, and they are recognized as they pass a line scan camera. A classification algorithm needs to be installed for every plant species. This paper describes an algorithm for the recognition of Norway spruce (Picea abies) embryos. A short review of conifer micropropagation by somatic embryogenesis is also given. The recognition algorithm is based on features calculated from the boundary of the object. Only the part of the boundary corresponding to the developing cotyledons (2 - 15) and the straight sides of the embryo is used for recognition. An index of the length of the cotyledons describes the developmental stage of the embryo. The testing set for classifier performance consisted of 118 embryos and 478 nonembryos. With the classification tolerances chosen, 69% of the objects classified as embryos by a human classifier were selected and 31% rejected. Less than 1% of the nonembryos were classified as embryos. The basic features developed can probably be easily adapted for the recognition of other conifer somatic embryos.

  2. Computer Vision-Based Portable System for Nitroaromatics Discrimination

    Directory of Open Access Journals (Sweden)

    Nuria López-Ruiz

    2016-01-01

    Full Text Available A computer vision-based portable measurement system is presented in this report. The system is based on a compact reader unit composed of a microcamera and a Raspberry Pi board as the control unit. This reader can acquire and process images of a sensor array formed by four nonselective sensing chemistries. By processing these array images it is possible to identify and quantify eight different nitroaromatic compounds (both explosives and related compounds) by using chromatic coordinates of a color space. The system is also capable of sending the obtained information over a WiFi link to a smartphone in order to present the analysis result to the final user. The identification and quantification algorithm programmed on the Raspberry Pi board is simple and quick enough to allow real-time analysis. The nitroaromatic compounds analyzed in the range of mg/L were picric acid, 2,4-dinitrotoluene (2,4-DNT), 1,3-dinitrobenzene (1,3-DNB), 3,5-dinitrobenzonitrile (3,5-DNBN), 2-chloro-3,5-dinitrobenzotrifluoride (2-C-3,5-DNBF), 1,3,5-trinitrobenzene (TNB), 2,4,6-trinitrotoluene (TNT), and tetryl (TT).
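
    The sensing chemistry and calibration belong to the original work; the sketch below only illustrates the kind of color-space computation mentioned above: average the RGB values of a sensing spot, convert them to chromaticity coordinates, and read a concentration off a previously fitted calibration curve. The calibration numbers are invented placeholders.

    ```python
    # Chromaticity-based readout of one sensing spot (all calibration values are placeholders).
    import numpy as np

    def chromaticity(rgb_patch):
        """Mean (r, g) chromaticity of an HxWx3 RGB patch."""
        mean = rgb_patch.reshape(-1, 3).mean(axis=0).astype(float)
        total = mean.sum()
        return mean[0] / total, mean[1] / total

    # hypothetical calibration: concentration (mg/L) versus measured r-chromaticity
    cal_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    cal_r = np.array([0.33, 0.36, 0.39, 0.44, 0.52])

    def estimate_concentration(rgb_patch):
        r, _ = chromaticity(rgb_patch)
        return float(np.interp(r, cal_r, cal_conc))   # linear interpolation on the curve
    ```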

  3. Traffic light detection and intersection crossing using mobile computer vision

    Science.gov (United States)

    Grewei, Lynne; Lagali, Christopher

    2017-05-01

    The solution for intersection detection and crossing to support the development of blindBike, an assisted biking system for the visually impaired, is discussed. Traffic light detection and intersection crossing are key needs in the task of biking. These problems are tackled through the use of mobile computer vision, in the form of a mobile application on an Android phone. This research builds on previous traffic light detection algorithms with a focus on efficiency and compatibility on a resource-limited platform. Light detection is achieved through blob detection algorithms that utilize training data to detect patterns of red, green and yellow in complex real-world scenarios where multiple lights may be present. Issues of obscurity and scale are also addressed. Safe intersection crossing in blindBike is discussed as well. This module takes a conservative "assistive" technology approach. To achieve this, blindBike uses not only the Android device but also an external bike cadence sensor with Bluetooth/ANT support. Real-world testing results are given and future work is discussed.

  4. Computer Vision Malaria Diagnostic Systems—Progress and Prospects

    Directory of Open Access Journals (Sweden)

    Joseph Joel Pollak

    2017-08-01

    Full Text Available Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.

  5. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense is involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created, in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  7. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Directory of Open Access Journals (Sweden)

    Anyela Camargo

    Full Text Available Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open source software. Source code for data analysis is written using the R package. The equations to calculate image descriptors have been also provided.

  8. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Science.gov (United States)

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P

    2014-01-01

    Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open source software. Source code for data analysis is written using the R package. The equations to calculate image descriptors have been also provided.
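
    The study's own analysis was implemented in R with its own descriptors; purely as an illustrative assumption, the Python sketch below shows the shape of such a pipeline: hue-based segmentation of the rosette, a handful of region-based shape descriptors, and a 5-component PCA over the resulting feature matrix.

    ```python
    # Segmentation -> shape features -> PCA, as a stand-in for the pipeline described above.
    import numpy as np
    from skimage import color, io, measure
    from sklearn.decomposition import PCA

    def rosette_features(path):
        """Segment the plant by a green hue range and return simple shape descriptors."""
        hsv = color.rgb2hsv(io.imread(path))
        mask = (hsv[..., 0] > 0.15) & (hsv[..., 0] < 0.5) & (hsv[..., 1] > 0.25)
        region = max(measure.regionprops(measure.label(mask)), key=lambda r: r.area)
        return np.array([region.area, region.perimeter, region.eccentricity,
                         region.solidity, region.extent,
                         4 * np.pi * region.area / region.perimeter ** 2])  # circularity

    # usage (image_paths is a list of rosette photographs):
    # features = np.vstack([rosette_features(p) for p in image_paths])
    # scores = PCA(n_components=5).fit_transform(features)   # 5 PCs, as in the study
    ```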

  9. Hand gesture recognition system based in computer vision and machine learning

    OpenAIRE

    Trigueiros, Paulo; Ribeiro, António Fernando; Reis, L.P.

    2015-01-01

    "Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19" Hand gesture recognition is a natural way of human computer interaction and an area of very active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research applied to Hum...

  10. A CLINICAL STUDY TO EVALUATE THE ROLE OF AKSHITARPANA, SHIRODHARA AND AN AYURVEDIC COMPOUND IN CHILDHOOD COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    Singh Omendra Pal

    2011-03-01

    Full Text Available Computer vision syndrome is one of the lifestyle disorders seen in children. About 88% of people who use computers every day suffer from this problem, and children are no exception. Computer Vision Syndrome (CVS) is the complex of eye and vision problems related to near work that are experienced during the use of video display terminals (TV and computers). Considering these prospects, a randomized double-blind placebo-controlled study was conducted among 40 clinically diagnosed children (5-15 years age group) with computer vision syndrome to evaluate the role of akshitarpana, shirodhara and an ayurvedic compound in childhood computer vision syndrome.

  11. People Recognition for Loja ECU911 applying artificial vision techniques

    Directory of Open Access Journals (Sweden)

    Diego Cale

    2016-05-01

    Full Text Available This article presents a technological proposal based on artificial vision which aims to search for people in an intelligent way by using IP video cameras. Currently, the manual searching process is time- and resource-demanding in contrast to an automated one, which means that it could be replaced. In order to obtain optimal results, three different artificial vision techniques were analyzed (Eigenfaces, Fisherfaces, Local Binary Patterns Histograms). The selection process considered factors such as lighting changes, image quality and changes in the angle of focus of the camera. In addition, a literature review was conducted to evaluate several points of view regarding artificial vision techniques.
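
    As a hedged illustration of one of the techniques named above, the sketch below uses the LBPH face recognizer from OpenCV's contrib "face" module (the opencv-contrib-python package is assumed to be installed); the gallery images and labels are random placeholders rather than real camera data.

    ```python
    # Train an LBPH face recognizer on a small gallery and query it with a new face crop.
    import cv2
    import numpy as np

    recognizer = cv2.face.LBPHFaceRecognizer_create()

    # gallery: grayscale face crops of known people, with integer identity labels
    gallery = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
    labels = np.array([0, 0, 1, 1], dtype=np.int32)
    recognizer.train(gallery, labels)

    # query: a new face crop taken from an IP-camera frame (placeholder here)
    query = np.random.randint(0, 255, (100, 100), dtype=np.uint8)
    label, distance = recognizer.predict(query)   # smaller distance = closer match
    print(label, distance)
    ```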

  12. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...

  13. Learning openCV computer vision with the openCV library

    CERN Document Server

    Bradski, Gary

    2008-01-01

    Learning OpenCV puts you right in the middle of the rapidly expanding field of computer vision. Written by the creators of OpenCV, the widely used free open-source library, this book introduces you to computer vision and demonstrates how you can quickly build applications that enable computers to "see" and make decisions based on the data. With this book, any developer or hobbyist can get up and running with the framework quickly, whether it's to build simple or sophisticated vision applications.

  14. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  15. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    Science.gov (United States)

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-03-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres with varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even under contrast conditions two orders of magnitude worse than in our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.

  16. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. Seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10^(-6) by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
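
    The study's exact feature values and network are not given in the record, so the following is only a hedged sketch of the classification stage: a small multilayer perceptron mapping seven per-grain visual features to a bread/durum label, using the 180/20 train/test split mentioned above. The feature values are random placeholders.

    ```python
    # MLP classifier for (placeholder) wheat-grain feature vectors.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 7))     # 7 selected visual features per grain (placeholder)
    y = np.repeat([0, 1], 100)        # 0 = bread wheat, 1 = durum wheat

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=180, stratify=y, random_state=1)
    mlp = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1))
    mlp.fit(X_tr, y_tr)
    print("test accuracy:", mlp.score(X_te, y_te))
    ```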

  17. Furnace grate monitoring by computer vision; Rosteroevervakning med bildanalys

    Energy Technology Data Exchange (ETDEWEB)

    Blom, Elisabet; Gustafsson, Bengt; Olsson, Magnus

    2005-01-01

    During the last couple of years, computer vision has developed considerably, alongside computers and video technology. This makes it technically and economically feasible to use cameras as a monitoring instrument. The first experiments with this type of equipment were made in the early 1990s. Most of those experiments were made to measure the bed length from the back of the grate. In this experiment the cameras were instead mounted at the front. The highest priority was to detect the topography of the fuel bed. An uneven fuel bed means combustion with local temperature variations, which makes the combustion more difficult to control. The goal was to show the possibilities of measuring fuel bed height, particle size and combustion intensity, or the spread of combustion, with pictures from one or two cameras. The test was done in a bark-fuelled boiler in Karlsborg, because that boiler has doors on the fuel feeding side suitable for looking down on the grate. The results show that the camera mounting used in Karlsborg was not good enough for a 3D calculation of the fuel bed. It was, however, possible to see the drying, and the flames were visible in the pictures. To see the flames and steam without overexposure caused by varying light levels at different points, it is possible to use a filter or a camera with non-linear sensitivity. To test whether a parallel mounting of the two cameras would work, a cold test was done in the grate test facility at KMW in Norrtaelje. With the pictures from this test we were able to make 3D measurements of the bed topography. The conclusions are that it is possible to measure bed height and bed topography with camera positions other than those we were able to use in this experiment. Particle size is easier to measure before the fuel enters the boiler, for example over a rim where the particles fall down. It is also possible to estimate the temperature zone where the steam is released.

  18. Image Processing, Computer Vision, and Deep Learning: new approaches to the analysis and physics interpretation of LHC events

    Science.gov (United States)

    Schwartzman, A.; Kagan, M.; Mackey, L.; Nachman, B.; De Oliveira, L.

    2016-10-01

    This review introduces recent developments in the application of image processing, computer vision, and deep neural networks to the analysis and interpretation of particle collision events at the Large Hadron Collider (LHC). The link between LHC data analysis and computer vision techniques relies on the concept of jet-images, building on the notion of a particle physics detector as a digital camera and the particles it measures as images. We show that state-of-the-art image classification techniques based on deep neural network architectures significantly improve the identification of highly boosted electroweak particles with respect to existing methods. Furthermore, we introduce new methods to visualize and interpret the high level features learned by deep neural networks that provide discrimination beyond physics-derived variables, adding a new capability to understand physics and to design more powerful classification methods at the LHC.
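
    Purely as an illustrative assumption (the review itself does not fix an architecture), the sketch below shows a minimal convolutional network of the kind applied to jet-images: a 1x25x25 calorimeter image is mapped to a single signal/background logit.

    ```python
    # Tiny CNN over jet-images; image size, channel counts and layer sizes are assumptions.
    import torch
    import torch.nn as nn

    class JetImageCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 6 * 6, 64), nn.ReLU(), nn.Linear(64, 1)
            )

        def forward(self, x):                 # x: (batch, 1, 25, 25) energy depositions
            return self.classifier(self.features(x))   # raw logit; pair with BCEWithLogitsLoss

    # usage: scores = JetImageCNN()(torch.randn(8, 1, 25, 25))
    ```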

  19. Crossing the divide between computer vision and data bases in search of image data bases

    NARCIS (Netherlands)

    M. Worring; A.W.M. Smeulders

    1998-01-01

    Image databases call upon the combined effort of computer vision and database technology to advance beyond exemplary systems. In this paper we chart several areas for mutually beneficial research activities and provide an architectural design to accommodate them.

  20. Supporting Real-Time Computer Vision Workloads using OpenVX on Multicore+GPU Platforms

    Science.gov (United States)

    2015-05-01

    Elliott, Glenn A.; Yang, Kecheng; Anderson, James H. (Department of Computer Science, University of North Carolina at Chapel Hill). In the automotive industry, there is currently great interest in vision-based sensing through cameras... The report addresses how computer vision workloads specified using OpenVX can be supported in a predictable way on multicore+GPU platforms.

  1. GpuCV : a GPU-accelerated framework for image processing and computer vision

    OpenAIRE

    ALLUSSE, Yannick; Horain, Patrick; Agarwal, Ankit; Saipriyadarshan, Cindula

    2008-01-01

    International audience; This paper briefly describes the state of the art of accelerating image processing with graphics hardware (GPU) and discusses some of its caveats. It then describes GpuCV, an open source multi-platform library for GPU-accelerated image processing and computer vision operators and applications. It is meant for computer vision scientists not familiar with GPU technologies. GpuCV is designed to be compatible with the popular OpenCV library by offering GPU-accelera...

  2. CloudCV: Deep Learning and Computer Vision on the Cloud

    OpenAIRE

    Agrawal, Harsh

    2016-01-01

    We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely chall...

  3. Soft computing techniques in engineering applications

    CERN Document Server

    Zhong, Baojiang

    2014-01-01

    Soft computing techniques, which are based on the information processing of biological systems, are now massively used in the areas of pattern recognition, prediction and planning, and acting on the environment. Soft computing is not a body of homogeneous concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing techniques are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, and learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.

  4. Grid computing techniques and applications

    CERN Document Server

    Wilkinson, Barry

    2009-01-01

    "… the most outstanding aspect of this book is its excellent structure: it is as though we have been given a map to help us move around this technology from the base to the summit … I highly recommend this book …" (Jose Lloret, Computing Reviews, March 2010)

  5. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    Directory of Open Access Journals (Sweden)

    Luis Pérez

    2016-03-01

    Full Text Available In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.

  6. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    Science.gov (United States)

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.

    2016-01-01

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work. PMID:26959030

  7. ON SOFT COMPUTING TECHNIQUES IN VARIOUS AREAS

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Das

    2013-02-01

    Full Text Available Soft Computing refers to the science of reasoning, thinking and deduction that recognizes and uses the real-world phenomena of grouping, membership, and classification of various quantities under study. As such, it is an extension of natural heuristics and is capable of dealing with complex systems, because it does not require strict mathematical definitions and distinctions for the system components. It differs from hard computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty and partial truth. In effect, the role model for soft computing is the human mind. The guiding principle of soft computing is: exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness and low solution cost. The main techniques in soft computing are evolutionary computing, artificial neural networks, fuzzy logic and Bayesian statistics. Each technique can be used separately, but a powerful advantage of soft computing is the complementary nature of the techniques. Used together they can produce solutions to problems that are too complex or inherently noisy to tackle with conventional mathematical methods. The applications of soft computing have demonstrated two main advantages. First, it has made it possible to solve nonlinear problems for which mathematical models are not available. Second, it has introduced human knowledge such as cognition, recognition, understanding, and learning into the field of computing. This has resulted in the possibility of constructing intelligent systems such as autonomous self-tuning systems and automatically designed systems. This paper highlights various areas of soft computing techniques.

  8. Smartphone, tablet computer and e-reader use by people with vision impairment.

    Science.gov (United States)

    Crossland, Michael D; Silva, Rui S; Macedo, Antonio F

    2014-09-01

    Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reason for not using these devices included cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  9. Signal- and Symbol-based Representations in Computer Vision

    DEFF Research Database (Denmark)

    Krüger, Norbert; Felsberg, Michael

    We discuss problems of signal- and symbol-based representations in terms of three dilemmas which are faced in the design of each vision system. Signal- and symbol-based representations are opposite ends of a spectrum of conceivable design decisions caught at opposite sides of the dilemmas. We make inherent problems explicit and describe potential design decisions for artificial visual systems to deal with the dilemmas.

  10. Cone beam computed tomography: A new vision in dentistry

    Directory of Open Access Journals (Sweden)

    Manas Gupta

    2015-01-01

    Full Text Available Cone beam computed tomography (CBCT) is a developing imaging technique designed to provide relatively low-dose, high-spatial-resolution visualization of high-contrast structures in the head and neck and other anatomic areas. It is a vital component of a dental patient's record. A literature review demonstrated that CBCT has been utilized for oral diagnosis, oral and maxillofacial surgery, endodontics, implantology, orthodontics, temporomandibular joint dysfunction, periodontics, and restorative and forensic dentistry. Recently, greater emphasis has been placed on CBCT expertise, three-dimensional (3D) images, and virtual models. This literature review showed that the different indications for CBCT are governed by the needs of the specific dental discipline and the type of procedure performed.

  11. Computer Vision Based Methods for Detection and Measurement of Psychophysiological Indicators

    DEFF Research Database (Denmark)

    Irani, Ramin

    patients’ physiological signals due to irritating skin and require huge amount of wires to collect and transmitting the signals. While contact-free computer vision techniques not only can be an easy and economical way to overcome this issue, they provide an automatic recognition of the patients’ emotions...... like pain and stress. This thesis reports a series of works done on contact-free heartbeat estimation, muscle fatigue detection, pain recognition and stress recognition. In measuring physiological parameters, two parameters are considered among many different physiological parameters: heartbeat rate...... to provide visible heartbeat peaks in the signal. A method for physical fatigue time offset detection from facial video is also introduced. One of the major contributions of the thesis, related to monitoring the patients, is recognizing level of pain and stress. The patients’ pain must be continuously...

  12. Road Recognition for Vision Navigation of Robot by Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Jagadeesh Thati,

    2011-07-01

    Full Text Available This paper presents a method for vision navigation of a robot by road recognition based on image processing. By taking advantage of the unique structure in road images, the squares on the road can be scanned while the robot is moving. In this paper we focus on the pixel positions, in the images, of the corners of the two squares. Large-scale experiments on road sequences show the behaviour of the road detection method across coordinate systems, road types and scenarios. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.

  13. Statistical and Computational Techniques in Manufacturing

    CERN Document Server

    2012-01-01

    In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the high number of parameters used, conventional approaches are no longer sufficient. Therefore, statistical and computational techniques have found several applications in manufacturing, namely, modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics, statistical and computational science researchers, mechanical, manufacturing and industrial engineers, and professionals in industries related to manu...

  14. UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

    Science.gov (United States)

    Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

    2017-08-01

    On the Waterfront Italo Falcomatà of Reggio Calabria one can admire the most extensive stretch of the walls of the Hellenistic period of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the past linked to the culture of Ancient Greece in the territory of Reggio Calabria. Over the years, up to the reconstruction of Reggio after the earthquake of 1783, this stretch of wall was always part of the outer city walls, and it was restored countless times to cope with degradation over time and with increasingly innovative and sophisticated siege techniques. The walls have been the subject of several studies on their history, their construction techniques, and their maintenance and restoration. This note describes the methodology used to build a three-dimensional model of the Greek Walls, developed by the Geomatics Laboratory of the DICEAM Department of the University "Mediterranea" of Reggio Calabria. The 3D modeling is based on imaging techniques, such as digital photogrammetry and computer vision, using a drone. The acquired digital images were then processed using the commercial software Agisoft PhotoScan. The results show the suitability of the technique in the field of cultural heritage, as an attractive alternative to more expensive and demanding techniques such as laser scanning.

  15. The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics.

    Science.gov (United States)

    Chinellato, Eris; Del Pobil, Angel P

    2009-06-01

    The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

  16. Signal- and Symbol-based Representations in Computer Vision

    DEFF Research Database (Denmark)

    Krüger, Norbert; Felsberg, Michael

    We discuss problems of signal- and symbol-based representations in terms of three dilemmas which are faced in the design of each vision system. Signal- and symbol-based representations are opposite ends of a spectrum of conceivable design decisions, caught at opposite sides of the dilemmas. We make...... inherent problems explicit and describe potential design decisions for artificial visual systems to deal with the dilemmas....

  17. On the Recognition-by-Components Approach Applied to Computer Vision

    Science.gov (United States)

    Baessmann, Henning; Besslich, Philipp W.

    1990-03-01

    The human visual system is usually able to recognize objects as well as their spatial relations without the support of depth information like stereo vision. For this reason we can easily understand cartoons, photographs and movies. It is the aim of our current research to exploit this aspect of human perception in the context of computer vision. From a monocular TV image we obtain information about the type of an object observed in the scene and its position relative to the camera (viewpoint). This paper deals with the theory of human image understanding as far as used in this system and describes the realization of a vision system based on these principles.

  18. A Computer Vision Method for 3D Reconstruction of Curves-Marked Free-Form Surfaces

    Institute of Scientific and Technical Information of China (English)

    Xiong Hanwei; Zhang Xiangwei

    2001-01-01

    Visual methods are now broadly used in reverse engineering for 3D reconstruction. The traditional computer vision methods are feature-based, i.e., they require that the objects reveal features owing to geometry or textures. For textureless free-form surfaces, dense feature points are added artificially. In this paper, a new method is put forward combining computer vision with CAGD. The surface is subdivided into N-side Gregory patches using marked curves, and a stereo algorithm is used to reconstruct the curves. Then, the cross-boundary tangent vector is computed through reflectance analysis. At last, the whole surface can be reconstructed by joining these patches with G1 continuity.

  19. Computational Techniques of Electromagnetic Dosimetry for Humans

    Science.gov (United States)

    Hirata, Akimasa; Fujiwara, Osamu

    There has been increasing public concern about the adverse health effects of human exposure to electromagnetic fields. This paper reviews the rationale of international safety guidelines for human protection against electromagnetic fields. Then, this paper also presents computational techniques to conduct dosimetry in anatomically-based human body models. Computational examples and remaining problems are also described briefly.

  20. Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System

    Science.gov (United States)

    Xu, Richard Y. D.; Jin, Jesse S.

    2007-01-01

    This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipment. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…

  2. Cyborg systems as platforms for computer-vision algorithm-development for astrobiology

    Science.gov (United States)

    McGuire, Patrick Charles; Rodríguez Manfredi, José Antonio; Martínez, Eduardo Sebastián; Gómez Elvira, Javier; Díaz Martínez, Enrique; Ormö, Jens; Neuffer, Kai; Giaquinta, Antonino; Camps Martínez, Fernando; Lepinette Malvitte, Alain; Pérez Mercader, Juan; Ritter, Helge; Oesker, Markus; Ontrup, Jörg; Walter, Jörg

    2004-03-01

    Employing the allegorical imagery from the film "The Matrix", we motivate and discuss our "Cyborg Astrobiologist" research program. In this research program, we are using a wearable computer and video camcorder in order to test and train a computer-vision system to be a field-geologist and field-astrobiologist.

  3. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision.

    Science.gov (United States)

    Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G

    2017-03-09

    Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision, and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors, using this computer vision approach, is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors as well as for suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify those work elements in the task that contribute more to increased risk for an injury. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.

  4. Target Angular Position Detection using Computer Vision Techniques and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Vilson B Mendes

    2010-01-01

    Full Text Available This paper presents a system for detecting the angular position of targets, using feature extraction techniques in digital imaging and artificial neural networks. Military ship images graphically generated by three-dimensional solid modeling software are used. Several tests using artificial neural networks applied to the set of geometric features were performed. The results show the important contribution of recognition algorithms to determining the ship angular position, regardless of its distance from the observer. The results encourage future applications for tracking targets using infrared images.

  5. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogeneous dual-core embedded system architecture.

    Science.gov (United States)

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car, captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

  6. On-chip imaging of Schistosoma haematobium eggs in urine for diagnosis by computer vision.

    Directory of Open Access Journals (Sweden)

    Ewert Linder

    Full Text Available BACKGROUND: Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique that can be exploited commercially for the development of inexpensive "mini-microscopes". Images can be transferred for analysis, both visually and by computer vision, both at the point of care and at remote locations. METHODS/PRINCIPAL FINDINGS: Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. CONCLUSIONS/SIGNIFICANCE: As proof of concept we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases.

  7. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    Science.gov (United States)

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-04

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision. The model establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions.
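
    To make the idea concrete, the toy sketch below attenuates a black-body approximation of the extraterrestrial solar spectrum with a Rayleigh-scattering transmittance term at several zenith angles; this is a simplified stand-in for the authors' model, and the constants are standard textbook approximations rather than values from the paper.

      # Toy direct-beam SPD model: black-body source times Rayleigh transmittance.
      import numpy as np

      h, c, k = 6.626e-34, 2.998e8, 1.381e-23        # Planck, speed of light, Boltzmann
      wavelengths_um = np.linspace(0.40, 0.70, 61)    # visible range in micrometres
      lam = wavelengths_um * 1e-6

      def blackbody(lam, T=5778.0):
          # Planck spectral radiance of a 5778 K source (relative units suffice here).
          return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

      def rayleigh_transmittance(lam_um, zenith_deg):
          airmass = 1.0 / np.cos(np.radians(zenith_deg))   # plane-parallel approximation
          tau = 0.008735 * lam_um ** (-4.08)               # approximate Rayleigh optical depth
          return np.exp(-tau * airmass)

      for zenith in (0, 48, 70):                           # sun high, medium, low
          spd = blackbody(lam) * rayleigh_transmittance(wavelengths_um, zenith)
          spd /= spd.max()                                 # relative SPD
          print(f"zenith {zenith:2d} deg: blue/red ratio = {spd[0] / spd[-1]:.2f}")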

  8. Analysis of the Indented Cylinder by the use of Computer Vision

    DEFF Research Database (Denmark)

    Buus, Ole Thomsen

    cylinder by the use of computer vision (or image analysis). Moreover, the imagery data sets, generated as a result of actual recordings of sorting experiments using the indented cylinder, are novel by their high dimensionality and size. Paper II in Appendix B makes one of these data sets available online......The research summarised in this PhD thesis took advantage of methods from computer vision to experimentally analyse the sorting/separation ability of a specific type of seed sorting device – known as an “indented cylinder”. The indented cylinder basically separates incoming seeds into two sub......-groups: (1) “long” seeds and (2) “short” seeds (known as length-separation). The motion of seeds being physically manipulated inside an active indented cylinder was analysed using various computer vision methods. The data from such analyses were used to create an overview of the machine’s ability to separate...

  9. Implementation of Water Quality Management by Fish School Detection Based on Computer Vision Technology

    Directory of Open Access Journals (Sweden)

    Yan Hou

    2015-08-01

    Full Text Available To address the detection of abnormal water quality, this study proposed a biological water abnormity detection method based on computer vision technology combined with a Support Vector Machine (SVM). First, computer vision is used to acquire the fish school motion feature parameters which can reflect the water quality, and these parameters are then preprocessed. Next, the sample set is established and the water quality abnormity monitoring model, based on computer vision technology combined with SVM, is obtained. At last, the model is used to analyze and evaluate the motion characteristic parameters of a fish school in unknown water, in order to indirectly monitor the situation of the water quality. In view of the great influence of the kernel function and parameter optimization on the model, this study compared different kinds of kernel functions and then performed optimization selection using Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and grid search. The results obtained demonstrate that the method is effective for monitoring water quality abnormity.
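
    As a hedged illustration of the classification stage (not the authors' implementation), the scikit-learn sketch below trains an RBF-kernel SVM on placeholder fish-school motion features and tunes its parameters by grid search, one of the three optimisation strategies compared in the study.

      # Hypothetical sketch: RBF-SVM on fish-school motion features with grid search.
      import numpy as np
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Placeholder data: rows = video clips, columns = motion features such as
      # mean speed, turning rate and dispersion; labels 0 = normal, 1 = abnormal water.
      X = np.random.rand(200, 4)
      y = np.random.randint(0, 2, size=200)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

      pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1]}
      search = GridSearchCV(pipe, param_grid, cv=5)
      search.fit(X_train, y_train)

      print("best parameters:", search.best_params_)
      print("held-out accuracy:", search.score(X_test, y_test))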

  10. Soft Computing Techniques for Process Control Applications

    Directory of Open Access Journals (Sweden)

    Rahul Malhotra

    2011-09-01

    Full Text Available Technological innovations in soft computing techniques have brought automation capabilities to new levels of applications. Process control is an important application in any industry for controlling complex system parameters, which can greatly benefit from such advancements. Conventional control theory is based on mathematical models that describe the dynamic behaviour of process control systems. Due to a lack of comprehensibility, conventional controllers are often inferior to intelligent controllers. Soft computing techniques provide an ability to make decisions and to learn from reliable data or an expert's experience. Moreover, soft computing techniques can cope with a variety of environmental and stability-related uncertainties. This paper explores the different areas of soft computing techniques, viz. fuzzy logic, genetic algorithms and the hybridization of the two, and summarizes the results of different process control case studies. It is inferred from the results that soft computing controllers provide better control of errors than conventional controllers. Further, hybrid fuzzy-genetic algorithm controllers have more successfully optimized the errors than standalone soft computing and conventional techniques.

  11. Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision.

    Science.gov (United States)

    Warren, William H

    2012-01-01

    David Marr's book Vision attempted to formulate a thoroughgoing formal theory of perception. Marr borrowed much of the "computational" level from James Gibson: a proper understanding of the goal of vision, the natural constraints, and the available information are prerequisite to describing the processes and mechanisms by which the goal is achieved. Yet, as a research program leading to a computational model of human vision, Marr's program did not succeed. This article asks why, using the perception of 3D shape as a morality tale. Marr presumed that the goal of vision is to recover a general-purpose Euclidean description of the world, which can be deployed for any task or action. On this formulation, vision is underdetermined by information, which in turn necessitates auxiliary assumptions to solve the problem. But Marr's assumptions did not actually reflect natural constraints, and consequently the solutions were not robust. We now know that humans do not in fact recover Euclidean structure--rather, they reliably perceive qualitative shape (hills, dales, courses, ridges), which is specified by the second-order differential structure of images. By recasting the goals of vision in terms of our perceptual competencies, and doing the hard work of analyzing the information available under ecological constraints, we can reformulate the problem so that perception is determined by information and prior knowledge is unnecessary.

  12. On the Use of Machine Vision Techniques to Detect Human Settlements in Satellite Images

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, C; Sengupta, S K; Poland, D; Futterman, J A H

    2003-01-10

    The automated production of maps of human settlement from recent satellite images is essential to studies of urbanization, population movement, and the like. The spectral and spatial resolution of such imagery is often high enough to successfully apply computer vision techniques. However, vast amounts of data have to be processed quickly. In this paper, we propose an approach that processes the data in several stages. At each stage, using features appropriate to that stage, we identify the portion of the data likely to contain information relevant to the identification of human settlements. This data is used as input to the next stage of processing. Since the size of the data has been reduced, we can use more complex features in this next stage. These features can be more representative of human settlements, and also more time-consuming to extract from the image data. Such a hierarchical approach enables us to process large amounts of data in a reasonable time, while maintaining the accuracy of human settlement identification. We illustrate our multi-stage approach using IKONOS 4-band and panchromatic images, and compare it with straightforward processing of the entire image.
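
    The multi-stage idea can be pictured with the simplified Python sketch below, in which an inexpensive per-tile feature prunes the image before a costlier feature is evaluated on the surviving tiles; the tile size, features and thresholds are invented placeholders rather than those used on the IKONOS data.

      # Illustrative two-stage screening of image tiles (placeholder features).
      import numpy as np

      def tile_view(img, size=64):
          rows, cols = img.shape[0] // size, img.shape[1] // size
          return [((r, c), img[r*size:(r+1)*size, c*size:(c+1)*size])
                  for r in range(rows) for c in range(cols)]

      def cheap_feature(tile):
          # Stage 1: brightness variance, fast enough to compute on every tile.
          return tile.var()

      def expensive_feature(tile):
          # Stage 2: mean gradient magnitude, a crude stand-in for the costlier
          # settlement-sensitive features applied only to the reduced data.
          gy, gx = np.gradient(tile.astype(float))
          return np.hypot(gx, gy).mean()

      image = np.random.rand(1024, 1024)                 # placeholder satellite band
      stage1 = [(idx, t) for idx, t in tile_view(image) if cheap_feature(t) > 0.05]
      candidates = [idx for idx, t in stage1 if expensive_feature(t) > 0.5]
      print(len(candidates), "candidate tiles flagged for settlement analysis")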

  13. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision ('scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  14. An innovative road marking quality assessment mechanism using computer vision

    Directory of Open Access Journals (Sweden)

    Kuo-Liang Lin

    2016-06-01

    Full Text Available Aesthetic quality acceptance of road marking works has relied on subjective visual examination. Due to a lack of quantitative operating procedures, acceptance outcomes can be biased, resulting in great quality variation. To improve the aesthetic quality acceptance procedure for road marking, we developed an innovative road marking quality assessment mechanism utilizing machine vision technologies. Using edge smoothness as a quantitative aesthetic indicator, the proposed prototype system first receives digital images of the finished road marking surface and has the images processed and analyzed to capture the geometric characteristics of the marking. The geometric characteristics are then evaluated to determine the quality level of the finished work. The system is demonstrated through two real cases to show how it works. In the end, a test comparing the assessment results between the proposed system and expert inspection is conducted to enhance the accountability of the proposed mechanism.
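
    A minimal sketch of how an edge-smoothness indicator might be computed with OpenCV is given below; it is not the authors' algorithm, and the image name, thresholding choice and acceptance limit are assumptions.

      # Hypothetical edge-smoothness check: score how much the marking boundary
      # deviates from a straight reference line.
      import cv2
      import numpy as np

      img = cv2.imread("road_marking.jpg", cv2.IMREAD_GRAYSCALE)
      _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
      marking = max(contours, key=cv2.contourArea)        # largest bright region
      pts = marking.reshape(-1, 2).astype(np.float32)

      # Fit a straight line and use the RMS deviation of boundary points from it
      # as a simple quantitative "edge smoothness" indicator.
      vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
      d = np.abs((pts[:, 0] - x0) * vy - (pts[:, 1] - y0) * vx)   # point-to-line distance
      rms_dev = float(np.sqrt((d ** 2).mean()))
      print("edge roughness (pixels RMS):", rms_dev)
      print("acceptable" if rms_dev < 2.0 else "needs rework")    # assumed tolerance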

  15. Image-plane processing for improved computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, which makes it possible to improve the performance of sensor array imaging systems for edge detection and location, was examined. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the location of edges is determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of a square sensor array lattice to decrease sensitivity to edge orientation improves edge information by about 10%.

  17. A Novel Solar Tracker Based on Omnidirectional Computer Vision

    Directory of Open Access Journals (Sweden)

    Zakaria El Kadmiri

    2015-01-01

    Full Text Available This paper presents a novel solar tracker system based on omnidirectional vision technology. The analysis of images acquired with a catadioptric camera allows accurate information to be extracted about the sun position in both elevation and azimuth. The main advantages of this system are its wide tracking field of 360° horizontally and 200° vertically. The system has the ability to track the sun in real time independently of the spatiotemporal coordinates of the site. The extracted information is used to control the two DC motors of the dual-axis mechanism to achieve the optimal orientation of the photovoltaic panels, with the aim of increasing power generation. Several experimental studies have been conducted and the obtained results confirm the power generation efficiency of the proposed solar tracker.

  18. Big data computing: Building a vision for ARS information management

    Science.gov (United States)

    Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...

  19. Computational Techniques for LED Optical Microcavities

    OpenAIRE

    García Santiago, Xavier

    2015-01-01

    The project consists of the development of numerical methods and computational techniques to model the processes of light extraction in power LED (Light-Emitting Diode) devices. We aim at the use of complex corrugated microstructures to boost the efficiency of our current LUXEON LED products. In order to study extraction efficiency in these devices, a 3D optics model of thin-film micro-structures must be developed and tested. In this project we develop a numerical model for computing and st...

  20. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  1. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants

    Directory of Open Access Journals (Sweden)

    Pedro J. Navarro

    2016-05-01

    Full Text Available Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine. Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.

  2. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    Science.gov (United States)

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine. Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
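
    As a simplified illustration of the kNN route to segmentation (not the authors' pipeline), the scikit-learn sketch below classifies RGB pixels as plant or background from a few hand-labelled samples; the image path, training colours and the choice of k are assumptions.

      # Rough kNN pixel-segmentation sketch for an RGB growth-chamber image.
      import cv2
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      img = cv2.imread("chamber_rgb.png")                 # BGR image from the chamber
      rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
      pixels = rgb.reshape(-1, 3).astype(float) / 255.0

      # A handful of hand-labelled pixel samples: 1 = plant, 0 = background.
      train_rgb = np.array([[0.10, 0.55, 0.12], [0.20, 0.60, 0.22],   # greenish = plant
                            [0.80, 0.80, 0.80], [0.30, 0.30, 0.35]])  # greyish = background
      train_lab = np.array([1, 1, 0, 0])

      knn = KNeighborsClassifier(n_neighbors=3)
      knn.fit(train_rgb, train_lab)

      mask = knn.predict(pixels).reshape(rgb.shape[:2]).astype(np.uint8) * 255
      cv2.imwrite("plant_mask.png", mask)
      print("plant pixel fraction:", float((mask > 0).mean()))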

  3. Compression Techniques for Improved Algorithm Computational Performance

    Science.gov (United States)

    Zalameda, Joseph N.; Howell, Patricia A.; Winfree, William P.

    2005-01-01

    Analysis of thermal data requires the processing of large amounts of temporal image data. The processing of the data for quantitative information can be time-intensive, especially out in the field where large areas are inspected, resulting in numerous data sets. By applying a temporal compression technique, improved algorithm performance can be obtained. In this study, analysis techniques are applied to compressed and non-compressed thermal data. A comparison is made based on computational speed and defect signal-to-noise.

  4. Computational intelligence techniques in health care

    CERN Document Server

    Zhou, Wengang; Satheesh, P

    2016-01-01

    This book presents research on emerging computational intelligence techniques and tools, with a particular focus on new trends and applications in health care. Healthcare is a multi-faceted domain, which incorporates advanced decision-making, remote monitoring, healthcare logistics, operational excellence and modern information systems. In recent years, the use of computational intelligence methods to address the scale and the complexity of the problems in healthcare has been investigated. This book discusses various computational intelligence methods that are implemented in applications in different areas of healthcare. It includes contributions by practitioners, technology developers and solution providers.

  5. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real time for color measurement on flat-surface food. For this purpose, a device capable of performing this task was designed and implemented (software and hardware), which consisted of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient automation for application in industrial quality-control processes in the food industry sector.
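
    A minimal sketch of such a colour measurement, assuming an OpenCV conversion to CIE L*a*b* and hypothetical colorimeter reference values, is shown below; the image, region of interest and reference numbers are placeholders, not data from the study.

      # Average L*a*b* over a region of interest and compare with colorimeter readings.
      import cv2
      import numpy as np

      bgr = cv2.imread("food_sample.png").astype(np.float32) / 255.0
      lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)      # float input gives L* in [0, 100]

      roi = lab[100:300, 150:350]                     # assumed flat-surface region
      L, a, b = roi.reshape(-1, 3).mean(axis=0)

      ref_L, ref_a, ref_b = 55.2, 12.4, 30.1          # hypothetical colorimeter values
      rel_err = lambda meas, ref: abs(meas - ref) / abs(ref) * 100.0
      print(f"L* = {L:.2f} ({rel_err(L, ref_L):.2f}% error)")
      print(f"a* = {a:.2f} ({rel_err(a, ref_a):.2f}% error)")
      print(f"b* = {b:.2f} ({rel_err(b, ref_b):.2f}% error)")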

  6. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-01-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real time for color measurement on flat-surface food. For this purpose, a device capable of performing this task was designed and implemented (software and hardware), which consisted of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), and the errors of the color parameters were estimated as eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient automation for application in industrial quality-control processes in the food industry sector.

  7. Computer Vision Syndrome and Associated Factors Among Medical ...

    African Journals Online (AJOL)

    physical health of Indian users especially among college students. Hence, this study was ..... temporary discomfort reduces the efficiency of work and thereby productivity. Health .... computer use, physical activity, stress, and depression among.

  8. Computer and visual display terminals (VDT) vision syndrome (CVDTS)

    National Research Council Canada - National Science Library

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-01-01

    .... However the prolonged use of these devices is not without any complication. Computer and visual display terminals syndrome is a constellation of symptoms ocular as well as extraocular associated with prolonged use of visual display terminals...

  9. Computer and visual display terminals (VDT) vision syndrome (CVDTS)

    OpenAIRE

    Parihar, J.K.S.; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K.S.

    2016-01-01

    Computer and visual display terminals have become an essential part of modern lifestyle. The use of these devices has made our life simple in household work as well as in offices. However the prolonged use of these devices is not without any complication. Computer and visual display terminals syndrome is a constellation of symptoms ocular as well as extraocular associated with prolonged use of visual display terminals. This syndrome is gaining importance in this modern era because of the wide...

  10. Calibration Experiments for a Computer Vision Oyster Volume Estimation System

    Science.gov (United States)

    Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.

    2009-01-01

    Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…

  12. Introduction to computational techniques for boundary layers

    Energy Technology Data Exchange (ETDEWEB)

    Blottner, F.G.

    1979-09-01

    Finite-difference procedures to solve boundary layer flows in fluid mechanics are explained. The governing equations and the transformations utilized are described. Basic solution techniques are illustrated with the similar boundary layer equations. Nonsimilar solutions are developed for the incompressible equations. Various example problems are solved, and the numerical results in the Fortran listing of the computer codes are presented.

  13. Computer vision syndrome: a review of ocular causes and potential treatments.

    Science.gov (United States)

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also occupational productivity since between 64% and 90% of computer users experience visual symptoms which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes for this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation.

  14. Target-less computer vision for traffic signal structure vibration studies

    Science.gov (United States)

    Bartilson, Daniel T.; Wieghaus, Kyle T.; Hurlebaus, Stefan

    2015-08-01

    The presented computer vision method allows for non-contact, target-less determination of traffic signal structure displacement and modal parameters, including mode shapes. By using an analytical model to relate structural displacement to stress, it is shown possible to utilize a rapid set-up and take-down computer vision-based system to infer structural stresses to a high degree of precision. Using this computer vision method, natural frequencies of the structure are determined with accuracy similar to strain gage and string potentiometer instrumentation. Even with structural displacements measured at less than 0.5 pixel, excellent mode shape results are obtained. Finally, one-minute equivalent stress ranges from ambient wind excitation are found to have excellent agreement between the inferred stress from strain gage data and stresses calculated from computer vision tied to an analytical stress model. This demonstrates the ability of this method and implemented system to develop fatigue life estimates using wind velocity data and modest technical means.
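
    A generic sketch of the modal-identification step, estimating dominant frequencies from a displacement time series with SciPy, is given below; the frame rate and the synthetic signal are placeholder assumptions rather than measurements from the traffic signal structures.

      # Estimate natural frequencies from a (synthetic) displacement record via its spectrum.
      import numpy as np
      from scipy.signal import find_peaks, welch

      fs = 60.0                                        # assumed camera frame rate [Hz]
      t = np.arange(0, 120, 1.0 / fs)
      x = 0.4 * np.sin(2 * np.pi * 1.1 * t) + 0.1 * np.sin(2 * np.pi * 7.3 * t)
      x += 0.05 * np.random.randn(t.size)              # measurement noise stand-in

      f, pxx = welch(x, fs=fs, nperseg=2048)           # power spectral density
      peaks, _ = find_peaks(pxx, height=pxx.max() * 0.05)
      print("estimated natural frequencies [Hz]:", np.round(f[peaks], 2))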

  15. Computer vision and augmented reality in gastrointestinal endoscopy

    Science.gov (United States)

    Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M.

    2015-01-01

    Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy—which relies on the integration of high-definition video data with pathologic correlates—requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. PMID:26133175

  16. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    -cost embedded computer with very limited computational resources compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images. After the segmentation stage......, the methods are primarily based on statistical analysis and inference. The regression statistics (i.e. R2) of the comparisons of system predictions and manual counts are 0.987 for counting honeybees, and 0.953 and 0.888 for measuring in-activity and out-activity, respectively. The experimental results...... demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. Besides, the computation time results show that the Raspberry Pi is a viable solution for such a real-time video processing system....
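
    The counting step can be sketched as follows, using OpenCV's MOG2 background subtractor as a stand-in for the thesis' own segmentation method; the video path, morphology kernel and minimum blob area are assumptions.

      # Background subtraction and per-frame counting of bee-sized blobs.
      import cv2
      import numpy as np

      cap = cv2.VideoCapture("hive_entrance.mp4")
      backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
      kernel = np.ones((3, 3), np.uint8)

      frame_counts = []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          fg = backsub.apply(frame)                                   # foreground mask
          fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel, iterations=2)
          contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          bees = [c for c in contours if cv2.contourArea(c) > 50]     # size filter
          frame_counts.append(len(bees))

      cap.release()
      print("mean bees per frame:", sum(frame_counts) / max(len(frame_counts), 1))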

  17. Computer vision and augmented reality in gastrointestinal endoscopy.

    Science.gov (United States)

    Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M

    2015-08-01

    Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy-which relies on the integration of high-definition video data with pathologic correlates-requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. © The Author(s) 2015. Published by Oxford University Press and the Digestive Science Publishing Co. Limited.

  18. Computer Vision Syndrome in Eleven to Eighteen-Year-Old Students in Qazvin

    Directory of Open Access Journals (Sweden)

    Khalaj

    2015-08-01

    Full Text Available Background Prolonged use of computers can lead to complications such as eye strain, eye aches and headaches, double and blurred vision, tired eyes, irritation, burning and itching eyes, eye redness, light sensitivity, dry eyes, muscle strains, and other problems. Objectives The aim of the present study was to evaluate visual problems and major symptoms, and their associations, among computer users aged between 11 and 18 years old residing in the Qazvin city of Iran, during year 2010. Patients and Methods This cross-sectional study was done on 642 secondary to pre-university students who had referred to the eye clinic of Buali hospital of Qazvin during year 2013. A questionnaire consisting of demographic information and 26 questions on visual effects of the computer was used to gather information. Participants answered all questions and then underwent complete eye examinations and, in some cases, cycloplegic refraction. Visual acuity (VA) was measured with a logMAR chart at six meters. Refractive errors were determined using an auto refractometer (Potec) and a Heine retinoscope. The collected data was then analyzed using the SPSS statistical software. Results The results of this study indicated that 63.86% of the subjects had refractive errors. Refractive errors were significantly different between children of different genders (P < 0.05). The most common complaints associated with the continuous use of computers were eyestrain, eye pain, eye redness, headache, and blurred vision. The most prevalent (81.8%) eye-related problem in computer users was eyestrain and the least prevalent was dry eyes (7.84%). In order to reduce computer-related problems, 54.2% of the participants suggested taking enough rest, 37.9% recommended use of computers only for necessary tasks, while 24.4% and 19.1% suggested the use of monitor shields and proper working distance, respectively. Conclusions Our findings revealed that using computers for prolonged periods of time can lead to eye

  19. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in monitoring and fault detection systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and fault detection techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using computational intelligence techniques was developed. This system will show the information obtained by different CI techniques in order to help operators take decisions in real time and guide them in fault diagnosis before the normal alarm limits are reached. (author)

  20. Laser Vision-Based Plant Geometries Computation in Greenhouses

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2014-04-01

    Full Text Available Plant growth statuses are important parameters in greenhouse environment control systems. Measuring plant geometries manually in greenhouses is time-consuming and of limited accuracy. To provide a portable method for measuring plant growth parameters automatically, a laser vision-based measurement system was developed in this paper, consisting of a camera and a laser sheet that scans the plant vertically. All equipment was mounted on a metal shelf of size 30 cm × 40 cm × 100 cm. The 3D point cloud was obtained by the laser sheet scanning the plant vertically while the camera recorded the laser lines projected on the plant. Calibration was conducted with two solid boards standing together at an angle of 90°. The camera’s internal and external parameters were calibrated with the image toolbox in MatLab®. It is useful to take a reference image without laser light and to use difference images to obtain the laser line. Laser line centers were extracted by an improved centroid method. Thus, we obtained the 3D point cloud structure of the sample plant. For leaf length measurement, an iterative method on the point cloud was used to extract the axis of the leaf point cloud set. A start point was selected at the end of the leaf point cloud set as the first point of the leaf axis. The points within a certain distance around the start point were chosen as a subset. The centroid of this subset of points was calculated and taken as the next axis point. The iteration continued until all points in the leaf point cloud set were selected. Leaf length was calculated by curve fitting on these axis points. In order to increase the accuracy of curve fitting, bi-directional start point selection was useful. For leaf area estimation, an exponential regression model was used to describe the grown leaves of the sampled plant (water spinach in this paper). To evaluate the method, a sample of 18 water spinaches planted in the greenhouse (length 16
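
    A minimal sketch of the difference-image and centroid step for laser-line extraction is given below; the file names and noise threshold are assumptions, and the plain intensity-weighted centroid stands in for the improved centroid method mentioned above.

      # Subtract the laser-off reference image and take a per-column intensity centroid.
      import cv2
      import numpy as np

      laser_on = cv2.imread("frame_laser_on.png", cv2.IMREAD_GRAYSCALE).astype(float)
      laser_off = cv2.imread("frame_laser_off.png", cv2.IMREAD_GRAYSCALE).astype(float)

      diff = np.clip(laser_on - laser_off, 0, None)    # keep only the laser stripe
      diff[diff < 20] = 0                              # suppress residual noise (assumed threshold)

      rows = np.arange(diff.shape[0], dtype=float)
      col_sum = diff.sum(axis=0)
      valid = col_sum > 0

      # Sub-pixel row coordinate of the stripe centre in every valid column.
      centres = np.full(diff.shape[1], np.nan)
      centres[valid] = (diff[:, valid] * rows[:, None]).sum(axis=0) / col_sum[valid]
      print("recovered", int(valid.sum()), "laser-line points")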

  1. Computational Intelligence Techniques for New Product Design

    CERN Document Server

    Chan, Kit Yan; Dillon, Tharam S

    2012-01-01

    Applying computational intelligence for product design is a fast-growing and promising research area in computer sciences and industrial engineering. However, there is currently a lack of books, which discuss this research area. This book discusses a wide range of computational intelligence techniques for implementation on product design. It covers common issues on product design from identification of customer requirements in product design, determination of importance of customer requirements, determination of optimal design attributes, relating design attributes and customer satisfaction, integration of marketing aspects into product design, affective product design, to quality control of new products. Approaches for refinement of computational intelligence are discussed, in order to address different issues on product design. Cases studies of product design in terms of development of real-world new products are included, in order to illustrate the design procedures, as well as the effectiveness of the com...

  2. Computer vision syndrome in presbyopia and beginning presbyopia: effects of spectacle lens type.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    This office field study investigated the effects of different types of spectacle lenses habitually worn by computer users with presbyopia and in the beginning stages of presbyopia. Computer vision syndrome was assessed through reported complaints and ergonomic conditions. A questionnaire regarding the type of habitually worn near-vision lenses at the workplace, visual conditions and the levels of different types of complaints was administered to 175 participants aged 35 years and older (mean ± SD: 52.0 ± 6.7 years). Statistical factor analysis identified five specific aspects of the complaints. Workplace conditions were analysed based on photographs taken in typical working conditions. In the subgroup of 25 users between the ages of 36 and 57 years (mean 44 ± 5 years), who wore distance-vision lenses and performed more demanding occupational tasks, the reported extents of 'ocular strain', 'musculoskeletal strain' and 'headache' increased with the daily duration of computer work and explained up to 44 per cent of the variance (rs = 0.66). In the other subgroups, this effect was smaller, while in the complete sample (n = 175), this correlation was approximately rs = 0.2. The subgroup of 85 general-purpose progressive lens users (mean age 54 years) adopted head inclinations that were approximately seven degrees more elevated than those of the subgroups with single vision lenses. The present questionnaire was able to assess the complaints of computer users depending on the type of spectacle lenses worn. A missing near-vision addition among participants in the early stages of presbyopia was identified as a risk factor for complaints among those with longer daily durations of demanding computer work. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  3. Computation of Internal Fluid Flows in Channels Using the CFD Software Tool FlowVision

    CERN Document Server

    Kochevsky, A N

    2004-01-01

    The article describes the CFD software tool FlowVision (OOO "Tesis", Moscow). The model equations used for this research are the set of Reynolds and continuity equations and the equations of the standard k-ε turbulence model. The aim of the paper was to test FlowVision by comparing the computational results for a number of simple internal channel fluid flows with known experimental data. The test cases are non-swirling and swirling flows in pipes and diffusers, and flows in stationary and rotating bends. Satisfactory correspondence of results was obtained both for flow patterns and for the respective quantitative values.

  4. Indoor scene classification of robot vision based on cloud computing

    Science.gov (United States)

    Hu, Tao; Qi, Yuxiao; Li, Shipeng

    2016-07-01

    For intelligent service robots, indoor scene classification is an important issue. To overcome the weak real-time performance of conventional algorithms, a new method based on cloud computing is proposed for global image features in indoor scene classification. With the MapReduce method, the global PHOG feature of an indoor scene image is extracted in parallel, and the feature eigenvector is used to train the decision classifier through SVM concurrently. Then, the indoor scene is classified by the decision classifier. To verify the algorithm performance, we carried out an experiment with 350 typical indoor scene images from the MIT LabelMe image library. Experimental results show that the proposed algorithm attains better real-time performance: it is 1.4 to 2.1 times faster than traditional classification methods which rely on single computation, while keeping a stable classification accuracy of 70%.
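
    As a single-machine simplification of this pipeline (plain HOG standing in for the pyramid PHOG feature, and no MapReduce stage), the sketch below trains a linear SVM scene classifier with scikit-image and scikit-learn; the image paths and labels are placeholders.

      # HOG descriptors feeding a linear SVM for indoor scene classification.
      import cv2
      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import LinearSVC

      def describe(path):
          img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          img = cv2.resize(img, (160, 120))
          return hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

      train_paths = ["kitchen_01.jpg", "kitchen_02.jpg", "office_01.jpg", "office_02.jpg"]
      train_labels = np.array([0, 0, 1, 1])            # 0 = kitchen, 1 = office

      X = np.vstack([describe(p) for p in train_paths])
      clf = LinearSVC().fit(X, train_labels)

      print("predicted class:", clf.predict(describe("query_frame.jpg")[None, :]))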

  5. Computational Biology and the Limits of Shared Vision

    DEFF Research Database (Denmark)

    Carusi, Annamaria

    2011-01-01

    of cases is necessary in order to gain a better perspective on social sharing of practices, and on what other factors this sharing is dependent upon. The article presents the case of currently emerging inter-disciplinary visual practices in the domain of computational biology, where the sharing of visual...... practices would be beneficial to the collaborations necessary for the research. Computational biology includes sub-domains where visual practices are coming to be shared across disciplines, and those where this is not occurring, and where the practices of others are resisted. A significant point......, its domain of study. Social practices alone are not sufficient to account for the shaping of evidence. The philosophy of Merleau-Ponty is introduced as providing an alternative framework for thinking of the complex inter-relations between all of these factors. This philosophy enables us...

  6. Computer Vision Research and Its Applications to Automated Cartography.

    Science.gov (United States)

    1983-07-27

    Report of the Artificial Intelligence Center, Computer Science and Technology Division (Fischler, Principal Investigator), prepared for the Defense Advanced Research Projects Agency, Arlington, Virginia (Program Manager: Cdr. Ronald Ohlander).

  7. Effect of Colored Overlays on Computer Vision Syndrome (CVS

    Directory of Open Access Journals (Sweden)

    Mark Rosenfield, MCOptom, PhD

    2015-06-01

    Full Text Available Background: Colored overlays may produce an improvement in reading when superimposed over printed materials. This study determined whether improvements in reading occur when the overlays are placed over a computer monitor. Methods: Subjects (N=30) read from a computer screen for 10 minutes with either a Cerium or control overlay positioned on the monitor. In a third condition, no overlay was present. Immediately following each trial, subjects reported ocular and visual symptoms experienced during the task. Results: Mean symptom scores following the Cerium, control, and no overlay conditions were 12.83, 17.37, and 15.65, respectively (p=0.47). However, a subgroup of 7 subjects (23%) reported significant improvements with the Cerium overlay. The mean symptom scores for the Cerium, control, and no overlay trials for this subgroup were 12.14, 29.86, and 28.93, respectively (p=0.03). No significant improvements in either reading speed or reading errors were observed in this subgroup. Conclusion: The use of colored overlays may provide a treatment method for some subjects reporting symptoms during computer use.

  8. TO STUDY THE ROLE OF ERGONOMICS IN THE MANAGEMENT OF COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    Anshu

    2016-03-01

    Full Text Available INTRODUCTION Ergonomics is the science of designing the job, equipment and workplace to fit the worker by obtaining a correct match between the human body, work-related tasks and work tools. By applying the science of ergonomics we can reduce the difficulties faced by computer users. OBJECTIVES To evaluate the efficacy of tear substitutes and the role of ergonomics in the management of Computer Vision Syndrome; to develop a counseling plan and an initial treatment plan, prevent complications, educate the subjects about the disease process and enhance public awareness. MATERIALS AND METHODS A minimum of 100 subjects were selected randomly irrespective of gender, place and nature of computer work, and ethnic differences. The subjects were in the age group of 10-60 years and had been using the computer for a minimum of 2 hours/day for at least 5-6 days a week. The subjects underwent tests such as Schirmer's test, tear film break-up time (TBUT), inter-blink interval and ocular surface staining. A Computer Vision score was derived from 5 symptoms, each of which was given a score of 2. The symptoms included foreign body sensation, redness, eyestrain, blurring of vision and frequent change in refraction. A score of more than 6 was treated as Computer Vision Syndrome and those subjects underwent synoptophore tests and refraction. RESULT In the present study, the 100 subjects were divided into 2 groups of 50 each; one group was given tear substitutes only, while in the other ergonomics was considered along with tear substitutes. There was more improvement after 4 weeks and 8 weeks in the group in which lubricants and ergonomics were combined than with lubricants alone, with more improvement seen in eyestrain and blurring. CONCLUSION Advanced training in proper computer usage can decrease discomfort.

  9. When Ultrasonic Sensors and Computer Vision Join Forces for Efficient Obstacle Detection and Recognition.

    Science.gov (United States)

    Mocanu, Bogdan; Tapu, Ruxandra; Zaharia, Titus

    2016-10-28

    In the most recent report published by the World Health Organization concerning people with visual disabilities it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces, while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects existing in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology we have performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing their mobility, while being friendly and easy to learn.

  10. When Ultrasonic Sensors and Computer Vision Join Forces for Efficient Obstacle Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Bogdan Mocanu

    2016-10-01

    Full Text Available In the most recent report published by the World Health Organization concerning people with visual disabilities it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces, while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects existing in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology we have performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing their mobility, while being friendly and easy to learn.

  11. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    Science.gov (United States)

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed.

  12. Factors leading to the computer vision syndrome: an issue at the contemporary workplace.

    Science.gov (United States)

    Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J

    2007-01-01

    Vision and eye related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done in order to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, ergonomic factors, and physiologic response of computer users. The distance from the eye to the computer's monitor (A), the computer's monitor height (B), and visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = tan⁻¹(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1, and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was made. Computer users in both groups used more tear supplements (as part of the syndrome) than expected, and this association was statistically significant. A key factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.
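
    The gaze-angle calculation above is simple enough to show directly. The following sketch reproduces angle = tan⁻¹(D/A) and the two-group split at 14 degrees; the measurement values are invented for illustration.

        import math

        def gaze_angle_deg(eye_to_monitor_in, monitor_height_in, visual_axis_height_in):
            """Magnitude of the gaze angle from the study's formula angle = tan^-1(D/A)."""
            A = eye_to_monitor_in                            # distance from eye to monitor
            D = monitor_height_in - visual_axis_height_in    # D = B - C
            return math.degrees(math.atan(abs(D) / A))

        angle = gaze_angle_deg(24.0, 40.0, 46.0)   # hypothetical measurements in inches
        group = 1 if angle < 14.0 else 2           # Group 1: 0-13.9 degrees; Group 2: 14 degrees or more
        print(round(angle, 1), group)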

  13. Computer vision-based classification of hand grip variations in neurorehabilitation.

    Science.gov (United States)

    Zariffa, José; Steeves, John D

    2011-01-01

    The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE

  14. A conceptual framework of computations in mid-level vision

    Directory of Open Access Journals (Sweden)

    Jonas eKubilius

    2014-12-01

    Full Text Available If a picture is worth a thousand words, as an English idiom goes, what should those words – or, rather, descriptors – capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model’s behavior and its limitations.

  15. A conceptual framework of computations in mid-level vision

    Science.gov (United States)

    Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P.

    2014-01-01

    If a picture is worth a thousand words, as an English idiom goes, what should those words—or, rather, descriptors—capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model's behavior and its limitations. PMID:25566044

  16. Lambda Vision

    Science.gov (United States)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching in the area of applying Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing up the architecture into a speed layer for low-latent processing and a batch layer for higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area field-of-views. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and precision of its results using a computer vision algorithm designed to identify man-made objects in sparse data terrain.

  17. Computer vision algorithm for diabetic foot injury identification and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R., E-mail: lsolis@uaz.edu.mx [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Diabetic foot is one of the most devastating consequences related to diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given that the existing tests and laboratories designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms. This means that the specialist completes a questionnaire based solely on observation and an invasive wound measurement. Using the questionnaire, the physician issues a diagnosis. In that sense, the diagnosis relies only on the criteria and experience of the specialist. For some variables, such as the lesion area or its location, this dependency is not acceptable. Currently, bio-engineering plays a key role in the diagnosis of different chronic degenerative diseases, and a timely diagnosis has proven to be the best tool against diabetic foot. Clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques, which makes it possible to optimize the evaluation of diabetic foot lesions. Using advanced techniques for object segmentation and adjusting the sensitivity parameter allows correlation between the wounds identified by the algorithm and those observed by the physician. Using the developed algorithm, it is possible to identify and assess the wounds, their size and location, in a non-invasive way. (Author)
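
    A minimal sketch of the kind of threshold-and-contour segmentation with an adjustable sensitivity parameter that the abstract describes is shown below; it is not the authors' algorithm, and the HSV thresholds, the meaning given to the sensitivity parameter and the input file name are assumptions.

        # Sketch: segment reddish lesion regions in a foot photograph and report their
        # areas and centroids. Not the authors' algorithm; thresholds are illustrative.
        import cv2
        import numpy as np

        def segment_wounds(image_path, sensitivity=40, min_area_px=200):
            img = cv2.imread(image_path)                          # BGR image
            hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
            # "Sensitivity" here widens or narrows the saturation range accepted as lesion tissue.
            lower = np.array([0, 255 - 2 * sensitivity, 40])
            upper = np.array([12, 255, 255])
            mask = cv2.inRange(hsv, lower, upper)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            wounds = []
            for c in contours:
                area = cv2.contourArea(c)
                if area >= min_area_px:
                    m = cv2.moments(c)
                    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                    wounds.append({"area_px": area, "centroid": (cx, cy)})
            return wounds

        print(segment_wounds("foot_photo.jpg"))                   # hypothetical input image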

  18. Computer Vision Syndrome and Associated Factors Among Medical and Engineering Students in Chennai

    Science.gov (United States)

    Logaraj, M; Madhupriya, V; Hegde, SK

    2014-01-01

    Background: Almost all institutions, colleges, universities and homes today use computers regularly. Very little research has been carried out on Indian users, especially college students, regarding the effects of computer use on the eye and vision related problems. Aim: The aim of this study was to assess the prevalence of computer vision syndrome (CVS) among medical and engineering students and the factors associated with the same. Subjects and Methods: A cross-sectional study was conducted among medical and engineering college students of a university situated in the suburban area of Chennai. Students who used a computer in the month preceding the date of study were included in the study. The participants were surveyed using a pre-tested structured questionnaire. Results: Among engineering students, the prevalence of CVS was found to be 81.9% (176/215), while among medical students it was found to be 78.6% (158/201). A significantly higher proportion of engineering students, 40.9% (88/215), used computers for 4-6 h/day as compared to medical students, 10% (20/201). Students using the computer for 4-6 h were at significantly higher risk of developing redness (OR = 1.2, 95% CI = 1.0-3.1, P = 0.04) and burning sensation (OR = 2.1, 95% CI = 1.3-3.1) than students using the computer for less than 4 h. Significant correlation was found between increased hours of computer use and the symptoms of redness, burning sensation, blurred vision and dry eyes. Conclusion: The present study revealed that more than three-fourths of the students complained of at least one of the symptoms of CVS while working on the computer. PMID:24761234

  19. Evolutionary computation techniques a comparative perspective

    CERN Document Server

    Cuevas, Erik; Oliva, Diego

    2017-01-01

    This book compares the performance of various evolutionary computation (EC) techniques when they are faced with complex optimization problems extracted from different engineering domains. Particularly focusing on recently developed algorithms, it is designed so that each chapter can be read independently. Several comparisons among EC techniques have been reported in the literature; however, they all suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. In each chapter, a complex engineering optimization problem is posed, and then a particular EC technique is presented as the best choice, according to its search characteristics. Lastly, a set of experiments is conducted in order to compare its performance to other popular EC methods.

  20. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  1. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  2. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Science.gov (United States)

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  3. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    Science.gov (United States)

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of Computer vision syndrome. Information was collected from Medline, Embase & National Library of Medicine over the last 30 years up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with Computer vision syndrome present to a variety of different specialists, including General Practitioners, Neurologists, Stroke physicians and Ophthalmologists. While the condition is common, there is a poor awareness in the public and among health professionals. Recognising this condition in the clinic or in emergency situations like the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of Computer vision syndrome and education of health professionals is vital. Preventive strategies should form part of work place ergonomics routinely. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  4. Application of computer vision in studying fire plume behavior of tilting flames

    Science.gov (United States)

    Aminfar, Amirhessam; Cobian Iñiguez, Jeanette; Pham, Stephanie; Chong, Joey; Burke, Gloria; Weise, David; Princevac, Marko

    2016-11-01

    With developments in computer science, especially in the field of computer vision, image processing has become an integral part of flow visualization. Computer vision can be used to visualize flow structure and to quantify its properties. We used a computer vision algorithm to study fire plume tilting when the fire is interacting with a solid wall. As the fire propagates toward the wall, the amount of air available for the fire to consume decreases on the wall side. Therefore, the fire starts tilting towards the wall. Aspen wood was used as the fuel source and various configurations of the fuel were investigated. The plume behavior was captured using a digital camera. In the post-processing, the flames were isolated from the image by using edge detection techniques, making it possible to develop an algorithm to calculate flame height and flame orientation. Moreover, by using an optical flow algorithm we were able to calculate the speed associated with the edges of the flame, which is related to the flame propagation speed and the effective vertical velocity of the flame. The results demonstrated that as the size of the flame increased, the flames started tilting towards the wall, leading to the conclusion that there should be a critical fire area at which the flames start to tilt. The algorithm also made it possible to calculate a critical distance at which the flame starts orienting towards the wall.
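
    A compact sketch of the two ingredients named above, edge-based flame isolation and dense optical flow, is given below using OpenCV; the video file name, thresholds and derived quantities are placeholders, and this is not the authors' exact pipeline.

        # Sketch: isolate the flame by intensity/edge masks, then estimate edge motion
        # with dense (Farneback) optical flow. File name and thresholds are placeholders.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("flame_test.mp4")            # hypothetical recording
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # Bright-region mask plus Canny edges give a rough flame outline.
            _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
            edges = cv2.Canny(gray, 100, 200)
            flame_edges = cv2.bitwise_and(edges, bright)

            ys, xs = np.nonzero(bright)
            ey, ex = np.nonzero(flame_edges)
            if len(ys) == 0 or len(ey) == 0:
                prev_gray = gray
                continue

            height_px = ys.max() - ys.min()                 # flame height in pixels
            tilt_slope = np.polyfit(ys, xs, 1)[0]           # horizontal drift per pixel of height

            # Dense optical flow; sample it at the flame-edge pixels for an edge speed.
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            edge_speed = np.linalg.norm(flow[ey, ex], axis=1).mean()

            print(height_px, round(tilt_slope, 3), round(edge_speed, 2))
            prev_gray = gray

        cap.release()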

  5. Vision-based system identification technique for building structures using a motion capture system

    Science.gov (United States)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

    This paper presents a new vision-based system identification (SI) technique for building structures by using a motion capture system (MCS). The MCS, with outstanding capabilities for dynamic response measurement, can provide gage-free measurements of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements from the MCS to accelerations and conducting SI by frequency domain decomposition (FDD). A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI directly applying the MCS-measured displacements to FDD was performed and showed results identical to those of the conventional SI method.
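
    The two processing steps named above, converting displacement to acceleration and conducting SI by frequency domain decomposition, can be sketched compactly; the sampling rate, window length and peak-picking rule below are illustrative assumptions rather than the paper's settings.

        # Sketch: displacement -> acceleration by finite differences, then frequency domain
        # decomposition (FDD): SVD of the cross-spectral density matrix at each frequency.
        # disp is assumed to be an (n_samples, n_markers) array of MCS displacements.
        import numpy as np
        from scipy.signal import csd, find_peaks

        def fdd(disp, fs):
            acc = np.gradient(np.gradient(disp, axis=0), axis=0) * fs**2   # second time derivative
            n_ch = acc.shape[1]
            freqs, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=1024)
            G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
            for i in range(n_ch):
                for j in range(n_ch):
                    _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=1024)
            s1 = np.empty(len(freqs))
            modes = np.empty((len(freqs), n_ch), dtype=complex)
            for k in range(len(freqs)):
                U, S, _ = np.linalg.svd(G[k])
                s1[k], modes[k] = S[0], U[:, 0]        # first singular value / vector
            peaks, _ = find_peaks(s1, prominence=s1.max() * 0.1)   # crude peak picking
            return freqs[peaks], modes[peaks]                       # natural frequencies, mode shapes

        # Example: 100 Hz motion-capture displacements for a three-storey frame (placeholder file).
        disp = np.load("mcs_displacements.npy")
        print(fdd(disp, fs=100.0)[0])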

  6. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object's color and then perform color segmentation. When an action judgment risks being wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best action judgment by weighted voting. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
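
    The weight voting idea lends itself to a very small sketch: each detection condition casts a score for every candidate action judgment, scores are combined with per-condition weights, and the best-scoring judgment wins. The conditions, scores and weights below are invented for illustration, not taken from the cited system.

        # Tiny sketch of a weight voting mechanism over candidate action judgments.
        def weighted_vote(candidate_scores, weights):
            """candidate_scores: {action: {condition: score}}; weights: {condition: weight}."""
            totals = {
                action: sum(weights[c] * s for c, s in cond_scores.items())
                for action, cond_scores in candidate_scores.items()
            }
            return max(totals, key=totals.get), totals

        scores = {
            "raise_left_arm":  {"color_segmentation": 0.9, "motion_trajectory": 0.7, "pose_template": 0.4},
            "raise_right_arm": {"color_segmentation": 0.3, "motion_trajectory": 0.6, "pose_template": 0.8},
        }
        weights = {"color_segmentation": 0.5, "motion_trajectory": 0.3, "pose_template": 0.2}
        print(weighted_vote(scores, weights))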

  7. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    Science.gov (United States)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
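
    A schematic, single-file version of a feature-detection, description and clustering pipeline of the kind described above is sketched below; ORB keypoints, a k-means visual vocabulary and an SVM stand in for the study's actual descriptors and models, and the file layout and labels are assumptions.

        # Sketch: bag-of-visual-words classification of powder micrographs.
        # ORB + k-means + SVM stand in for the feature detection/description and
        # learning steps described in the abstract; paths and labels are placeholders.
        import glob
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        paths = sorted(glob.glob("micrographs/*.png"))          # hypothetical image folder
        labels = np.loadtxt("powder_labels.txt", dtype=int)     # one label per image (placeholder)

        orb = cv2.ORB_create(nfeatures=500)
        descs_per_image = []
        for p in paths:
            img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
            _, des = orb.detectAndCompute(img, None)
            descs_per_image.append(des if des is not None else np.zeros((1, 32), np.uint8))

        # Build a visual vocabulary, then histogram each image's descriptors over it.
        k = 64
        vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
            np.vstack(descs_per_image).astype(np.float32))
        X = np.vstack([
            np.bincount(vocab.predict(d.astype(np.float32)), minlength=k) / max(len(d), 1)
            for d in descs_per_image
        ])

        clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
        print("training accuracy:", clf.score(X, labels))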

  8. Analysis of the Indented Cylinder by the use of Computer Vision

    DEFF Research Database (Denmark)

    Buus, Ole Thomsen

    The research summarised in this PhD thesis took advantage of methods from computer vision to experimentally analyse the sorting/separation ability of a specific type of seed sorting device – known as an “indented cylinder”. The indented cylinder basically separates incoming seeds into two sub-groups: (1) “long” seeds and (2) “short” seeds (known as length-separation). The motion of seeds being physically manipulated inside an active indented cylinder was analysed using various computer vision methods. The data from such analyses were used to create an overview of the machine’s ability to separate ... certain species of seed from each other. Seeds are processed in order to achieve a high-quality end product: a batch of a single species of crop seed. Naturally, farmers need processed clean crop seeds that are free from non-seed impurities, weed seeds, and non-viable or dead crop seeds. Since...

  9. Computationally Efficient Iterative Pose Estimation for Space Robot Based on Vision

    Directory of Open Access Journals (Sweden)

    Xiang Wu

    2013-01-01

    Full Text Available In the pose estimation problem for space robots, photogrammetry has been used to determine the relative pose between an object and a camera. The calculation of the projection from two-dimensional measured data to three-dimensional models is of utmost importance in this vision-based estimation; however, this process is usually time consuming, especially in the outer space environment with limited hardware performance. This paper proposes a computationally efficient iterative algorithm for pose estimation based on vision technology. In this method, an error function is designed to estimate the object-space collinearity error, and the error is minimized iteratively for the rotation matrix based on the absolute orientation information. Experimental results show that this approach achieves accuracy comparable with the SVD-based methods; however, the computational time is greatly reduced due to the use of the absolute orientation method.
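
    The absolute orientation step referred to above has a well-known closed-form SVD solution for the rotation and translation between two point sets; the sketch below shows that building block only, not the paper's full iterative object-space-error algorithm.

        # Sketch of the SVD-based absolute orientation step: find rotation R and translation t
        # that best map model points P onto observed points Q (both given as (N, 3) arrays).
        import numpy as np

        def absolute_orientation(P, Q):
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t

        # Toy check with a known rotation about z (placeholder data).
        theta = np.radians(30)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0],
                           [0, 0, 1]])
        P = np.random.rand(10, 3)
        Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])
        R, t = absolute_orientation(P, Q)
        print(np.allclose(R, R_true), np.round(t, 3))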

  10. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

    Full Text Available The study investigates ship-bridge collision avoidance. A novel system for ship-bridge collision avoidance based on monocular computer vision is proposed. In the new system, moving ships are first captured in video sequences. Detection and tracking of the moving objects are then performed to identify the regions in the scene that correspond to them. Next, a quantitative description of the dynamic states of the moving objects in the geographical coordinate system, including location, velocity and orientation, is calculated based on monocular vision geometry. Finally, the collision risk is evaluated and ship manipulation commands are suggested accordingly, aiming to avoid the potential collision. Both computer simulation and field experiments have been carried out to validate the proposed system. The analysis results show the effectiveness of the proposed system.

  11. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    Science.gov (United States)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2016-12-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.

  12. Computer vision syndrome: A study of the knowledge, attitudes and practices in Indian Ophthalmologists

    Directory of Open Access Journals (Sweden)

    Bali Jatinder

    2007-01-01

    Full Text Available Purpose: To study the knowledge, attitude and practices (KAP) towards computer vision syndrome prevalent in Indian ophthalmologists and to assess whether 'computer use by practitioners' had any bearing on the knowledge and practices in computer vision syndrome (CVS). Materials and Methods: A random KAP survey was carried out on 300 Indian ophthalmologists using a 34-point spot-questionnaire in January 2005. Results: All the doctors who responded were aware of CVS. The chief presenting symptoms were eyestrain (97.8%), headache (82.1%), tiredness and burning sensation (79.1%), watering (66.4%) and redness (61.2%). Ophthalmologists using computers reported that focusing from distance to near and vice versa (P = 0.006, χ2 test), blurred vision at a distance (P = 0.016, χ2 test) and blepharospasm (P = 0.026, χ2 test) formed part of the syndrome. The main mode of treatment used was tear substitutes. Half of ophthalmologists (50.7%) were not prescribing any spectacles. They did not have any preference for any special type of glasses (68.7%) or spectral filters. Computer-users were more likely to prescribe sedatives/anxiolytics (P = 0.04, χ2 test), spectacles (P = 0.02, χ2 test) and conscious frequent blinking (P = 0.003, χ2 test) than the non-computer-users. Conclusions: All respondents were aware of CVS. Confusion regarding treatment guidelines was observed in both groups. Computer-using ophthalmologists were more informed of symptoms and diagnostic signs but were misinformed about treatment modalities.

  13. Tools and techniques for computational reproducibility.

    Science.gov (United States)

    Piccolo, Stephen R; Frampton, Michael B

    2016-07-11

    When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed-and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches.

  14. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
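
    The recursive equations mentioned above are short enough to show directly; the following is a plain serial reference implementation (the software baseline, not the row-parallel hardware scheme proposed in the paper).

        # Serial reference computation of an integral image using the standard recursion
        #   ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1),
        # followed by a constant-time rectangular-sum lookup from four values.
        import numpy as np

        def integral_image(img):
            h, w = img.shape
            ii = np.zeros((h + 1, w + 1), dtype=np.int64)   # zero-padded border simplifies indexing
            for y in range(1, h + 1):
                for x in range(1, w + 1):
                    ii[y, x] = img[y - 1, x - 1] + ii[y - 1, x] + ii[y, x - 1] - ii[y - 1, x - 1]
            return ii

        def rect_sum(ii, top, left, bottom, right):
            """Sum of img[top:bottom+1, left:right+1] from four lookups."""
            return ii[bottom + 1, right + 1] - ii[top, right + 1] - ii[bottom + 1, left] + ii[top, left]

        img = np.arange(16, dtype=np.int64).reshape(4, 4)
        ii = integral_image(img)
        print(rect_sum(ii, 1, 1, 2, 2), img[1:3, 1:3].sum())   # both print 30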

  15. Comparison of Computer Vision and Photogrammetric Approaches for Epipolar Resampling of Image Sequence.

    Science.gov (United States)

    Kim, Jae-In; Kim, Taejung

    2016-03-22

    Epipolar resampling is the procedure of eliminating vertical disparity between stereo images. Due to its importance, many methods have been developed in the computer vision and photogrammetry fields. However, we argue that epipolar resampling of image sequences, instead of a single pair, has not been studied thoroughly. In this paper, we compare epipolar resampling methods developed in both fields for handling image sequences. Firstly, we briefly review the uncalibrated and calibrated epipolar resampling methods developed in computer vision, as well as the photogrammetric epipolar resampling methods. While it is well known that epipolar resampling methods developed in computer vision and in photogrammetry are mathematically identical, we also point out differences in parameter estimation between them. Secondly, we tested representative resampling methods in both fields and performed an analysis. We showed that, for epipolar resampling of a single image pair, all uncalibrated and photogrammetric methods tested could be used. More importantly, we also showed that, for image sequences, all methods tested, except the photogrammetric Bayesian method, showed significant variations in epipolar resampling performance. Our results indicate that the Bayesian method is favorable for epipolar resampling of image sequences.
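
    For readers who want to try the uncalibrated, computer vision style of epipolar resampling on a single pair, OpenCV exposes the standard building blocks; the sketch below is a generic example rather than one of the specific methods compared in the paper, and the image file names are placeholders.

        # Sketch of uncalibrated epipolar resampling for one stereo pair with OpenCV:
        # match features, estimate the fundamental matrix, compute rectifying homographies,
        # and warp both images so that conjugate points share the same row (zero vertical disparity).
        import cv2
        import numpy as np

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(4000)
        kp1, des1 = orb.detectAndCompute(left, None)
        kp2, des2 = orb.detectAndCompute(right, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]

        h, w = left.shape
        ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
        cv2.imwrite("left_rect.png", cv2.warpPerspective(left, H1, (w, h)))
        cv2.imwrite("right_rect.png", cv2.warpPerspective(right, H2, (w, h)))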

  16. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discuss the image preprocessing, object detection and pose estimation algorithms under the poor lighting conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, is described in detail. • The technical issues encountered during the research are discussed. - Abstract: The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means for in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically by using computer vision. The use of computer vision for identification and location faces some difficult challenges such as shadows, poor contrast, low illumination levels, weak texture and so on. The method developed in this paper enables credible identification of objects with shadows through invariant images and edge detection. The proposed algorithms are validated through our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects of different shapes and sizes can be picked up successfully.
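
    A schematic version of the highlighted pipeline (contour detection, contour filtering, MER extraction and object location in the image plane) is sketched below; the thresholds and input image are illustrative assumptions, not the EAMA system's values, and the final 3D pose estimation step is omitted.

        # Sketch: contour detection -> contour filtering -> minimum enclosing rectangle (MER)
        # -> object location in the image plane. Thresholds and the input file are illustrative.
        import cv2
        import numpy as np

        img = cv2.imread("in_vessel_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
        img = cv2.equalizeHist(img)                      # compensate for low illumination
        edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 30, 90)

        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = [c for c in contours if 100 < cv2.contourArea(c) < 20000]   # contour filter

        vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
        for c in candidates:
            rect = cv2.minAreaRect(c)                    # MER: centre, size, orientation
            (cx, cy), (rw, rh), angle = rect
            box = cv2.boxPoints(rect).astype(np.int32)
            cv2.drawContours(vis, [box], 0, (0, 255, 0), 2)
            print("object at (%.1f, %.1f), size %.0fx%.0f px, angle %.1f deg" % (cx, cy, rw, rh, angle))

        cv2.imwrite("detections.png", vis)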

  17. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  18. Practice of Ergonomic Principles and Computer Vision Syndrome (CVS among Undergraduates Students in Chennai

    Directory of Open Access Journals (Sweden)

    Muthunarayanan Logaraj

    2013-04-01

    Full Text Available ABSTRACT Background: With the increasing use of computers by young adults in educational institutions as well as at home, there is a need to investigate whether students are adopting ergonomic principles when using computers. Objective: To assess the practice of students on ergonomic principles while working on computers and their association with the symptoms of Computer Vision Syndrome (CVS). Methodology: A cross-sectional study was conducted among undergraduate students using a pre-tested structured questionnaire on demographic profile, practice of ergonomic principles and symptoms of CVS experienced during continuous computer work within the past one month. Results: Out of 416 students studied, 50% of them viewed the computer at a distance of 20 to 28 inches, 61% viewed the computer screen at the same level, 42.8% placed the reference material between the monitor and keyboard, 24.5% tilted the screen backward, 75.7% took frequent breaks and 56.0% blinked frequently to prevent CVS. Students who viewed the computer at a distance of less than 20 inches, viewed upwards or downwards to see the computer, did not avoid glare and did not take frequent breaks were at higher risk of developing CVS. Students who did not use an adjustable chair, a height-adjustable keyboard and an anti-glare screen were at higher risk of developing CVS. Conclusion: Students who were not practicing ergonomic principles and did not check their posture and make ergonomic alterations were at higher risk of developing CVS. Keywords: Ergonomic principles, computer vision syndrome, undergraduate students. [Natl J Med Res 2013; 3(2): 111-116]

  19. Computer vision system approach in colour measurements of foods: Part II. validation of methodology with real foods

    Directory of Open Access Journals (Sweden)

    Fatih TARLAK

    2016-01-01

    Full Text Available Abstract The colour of food is one of the most important factors affecting consumers’ purchasing decision. Although there are many colour spaces, the most widely used colour space in the food industry is L*a*b* colour space. Conventionally, the colour of foods is analysed with a colorimeter that measures small and non-representative areas of the food and the measurements usually vary depending on the point where the measurement is taken. This leads to the development of alternative colour analysis techniques. In this work, a simple and alternative method to measure the colour of foods known as “computer vision system” is presented and justified. With the aid of the computer vision system, foods that are homogenous and uniform in colour and shape could be classified with regard to their colours in a fast, inexpensive and simple way. This system could also be used to distinguish the defectives from the non-defectives. Quality parameters of meat and dairy products could be monitored without any physical contact, which causes contamination during sampling.
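
    A minimal sketch of the core computation such a computer vision system performs, converting the pixels of a food region from sRGB to CIE L*a*b* and averaging them, is given below; the image file and the crude brightness-based mask are placeholders, and a real system would also calibrate for the illuminant and segment the product properly.

        # Sketch: average CIE L*a*b* colour of a food region from a digital photograph.
        import numpy as np
        from skimage import io, color

        rgb = io.imread("food_sample.jpg") / 255.0        # sRGB values scaled to [0, 1]
        lab = color.rgb2lab(rgb)                          # L* in [0, 100]; a*, b* roughly [-128, 127]

        mask = rgb.mean(axis=2) > 0.2                     # crude foreground mask (assumption)
        L, a, b = [lab[..., i][mask].mean() for i in range(3)]
        print("L* = %.1f, a* = %.1f, b* = %.1f" % (L, a, b))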

  20. Soft computing techniques in voltage security analysis

    CERN Document Server

    Chakraborty, Kabir

    2015-01-01

    This book focuses on soft computing techniques for enhancing voltage security in electrical power networks. Artificial neural networks (ANNs) have been chosen as a soft computing tool, since such networks are eminently suitable for the study of voltage security. The different architectures of the ANNs used in this book are selected on the basis of intelligent criteria rather than by a “brute force” method of trial and error. The fundamental aim of this book is to present a comprehensive treatise on power system security and the simulation of power system security. The core concepts are substantiated by suitable illustrations and computer methods. The book describes analytical aspects of operation and characteristics of power systems from the viewpoint of voltage security. The text is self-contained and thorough. It is intended for senior undergraduate students and postgraduate students in electrical engineering. Practicing engineers, Electrical Control Center (ECC) operators and researchers will also...

  1. Technique for Calibration of Chassis components based on encoding marks and machine Vision metrology

    Institute of Scientific and Technical Information of China (English)

    SONG Li-mei; ZHANG Chun-bo; WEI Yi-ying; CHEN Hua-wei

    2011-01-01

    A novel technique for calibrating crucial parameters of chassis components is proposed, which utilizes machine vision metrology to measure the 3D coordinates of the center of a component's assembly hole in the 3D world coordinate system. In the measurement, encoding marks with special patterns are mounted on the chassis component, together with a cross drone and a staff gauge located near the chassis. The cross drone consists of two planes orthogonal to each other, and the staff gauge provides high-precision references in 3D space. A few images are taken by a high-resolution camera from different orientations and perspectives. The 3D coordinates of 5 key points on the encoding marks are calculated by the machine vision technique, and those of the centers of the holes to be calibrated are calculated by the algorithm derived in this paper. Experimental results show that the algorithm and the technique can satisfy the precision requirement when the components are assembled, and the average measurement precision provided by the algorithm is 0.0174 mm.

  2. Model of Quantum Computing in the Cloud: The Relativistic Vision Applied in Corporate Networks

    Directory of Open Access Journals (Sweden)

    Chau Sen Shia

    2016-08-01

    Full Text Available Cloud computing is one of the subjects of interest to information technology professionals and to organizations, particularly where financial economics and return on investment for companies are concerned. This work contributes a model of quantum computing in the cloud that uses concepts from relativistic physics and the foundations of quantum mechanics to propose a new vision of the use of virtualization environments in corporate networks. The model was based on simulating and testing connections with providers in virtualization environments with datacenters, and on applying the basics of relativity and quantum mechanics to communication among networks of companies, in order to establish alliances and resource sharing between the organizations. Data were collected and calculations were performed that identify connections and integrations relating cloud computing to the relativistic vision, in a way that complements the approaches of physics and computing with theories of the magnetic field and the propagation of light. The research is characterized as exploratory, because it examines physical connections with cloud computing, the network of companies and the adherence of the proposed model. The relationship between the proposal and its practical application is presented, making it possible to describe the main features and demonstrating the integration of the relativistic model with new datacenter virtualization technologies, optimizing resources by means of the propagation of light, electromagnetic waves, simultaneity, length contraction and time dilation.

  3. Hybrid computer techniques for solving partial differential equations

    Science.gov (United States)

    Hammond, J. L., Jr.; Odowd, W. M.

    1971-01-01

    The techniques overcome equipment limitations that restrict other computer techniques to the solution of trivial cases. The use of curve fitting by quadratic interpolation greatly reduces the required digital storage space.

  4. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093 (United States); Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); McNutt, Todd R. [Department of Radiation Oncology and Molecular Radiation Science, School of Medicine, Johns Hopkins University, Baltimore, Maryland 21231 (United States); Mutic, Sasa [Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri 63110 (United States)

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  5. SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality

    Energy Technology Data Exchange (ETDEWEB)

    MacDougall, R.D.; Scherrer, B [Boston Children’s Hospital, Boston, MA (United States); Don, S [Washington University, St. Louis, MO (United States)

    2016-06-15

    Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video were streamed to a Windows 10 PC. Proprietary software created an augmented reality interface where overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of ordered body part, position of image receptor, thickness of anatomy, location of AEC cells, collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data were collected in a volunteer study to validate patient thickness measurements; x-ray images were not acquired. Results: Proprietary software correctly identified the ordered body part, measured patient motion, and calculated the thickness of anatomy. Pre-clinical data demonstrated accuracy and precision of body part thickness measurement when compared with other methods (e.g. laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund.
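
    The thickness measurement can be pictured as a simple depth-map computation: the anatomy thickness is the calibrated sensor-to-table distance minus the measured sensor-to-skin distance over a region of interest. The sketch below is a minimal illustration under that assumption, not the proprietary software described in the abstract:

```python
import numpy as np

def estimate_thickness_mm(depth_frame_mm, roi, tube_to_table_mm):
    """Estimate anatomy thickness inside a region of interest of a depth frame.

    depth_frame_mm   : 2-D array of distances from the sensor to the nearest surface (mm)
    roi              : (row_slice, col_slice) covering the anatomy of interest
    tube_to_table_mm : calibrated distance from the sensor/tube plane to the table top
    """
    patch = depth_frame_mm[roi]
    valid = patch[patch > 0]                # 0 encodes "no depth reading"
    tube_to_skin = np.median(valid)         # median is robust against speckle noise
    return tube_to_table_mm - tube_to_skin

# toy frame: table at 1500 mm, a 200 mm-thick "patient" in the centre
frame = np.full((424, 512), 1500.0)
frame[150:300, 200:350] = 1300.0
roi = (slice(150, 300), slice(200, 350))
print(estimate_thickness_mm(frame, roi, tube_to_table_mm=1500.0))  # ~200 mm
```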

  6. Vision correction for computer users based on image pre-compensation with changing pupil size.

    Science.gov (United States)

    Huang, Jian; Barreto, Armando; Alonso, Miguel; Adjouadi, Malek

    2011-01-01

    Many computer users suffer varying degrees of visual impairment, which hinder their interaction with computers. In contrast with available methods of vision correction (spectacles, contact lenses, LASIK, etc.), this paper proposes a vision correction method for computer users based on image pre-compensation. The blurring caused by visual aberration is counteracted through the pre-compensation performed on images displayed on the computer screen. The pre-compensation model used is based on the visual aberration of the user's eye, which can be measured by a wavefront analyzer. However, the aberration measured is associated with one specific pupil size. If the pupil has a different size during viewing of the pre-compensated images, the pre-compensation model should also be modified to sustain appropriate performance. In order to solve this problem, an adjustment of the wavefront function used for pre-compensation is implemented to match the viewing pupil size. The efficiency of these adjustments is evaluated with an "artificial eye" (high resolution camera). Results indicate that the adjustment used is successful and significantly improves the images perceived and recorded by the artificial eye.
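
    Conceptually, the pre-compensation amounts to deconvolving the on-screen image with the eye's point-spread function (PSF) derived from the measured wavefront for the current pupil size. The sketch below is a generic frequency-domain (Wiener-style) illustration of that idea, not the authors' implementation; the Gaussian PSF stands in for a measured, pupil-size-matched PSF:

```python
import numpy as np

def pad_psf(psf, shape):
    """Embed a small centred PSF into a zero array of the image shape, then wrap it
    so that its peak sits at index (0, 0), as the FFT expects."""
    big = np.zeros(shape)
    r, c = psf.shape
    big[:r, :c] = psf
    return np.roll(big, (-(r // 2), -(c // 2)), axis=(0, 1))

def precompensate(image, psf, k=1e-2):
    """Wiener-style inverse filtering: pre-distort `image` so that blurring by `psf`
    (the eye's point-spread function) approximately restores the original."""
    H = np.fft.fft2(pad_psf(psf, image.shape))
    G = np.conj(H) / (np.abs(H) ** 2 + k)          # regularised inverse of the blur
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(out, 0.0, 1.0)                  # displayable dynamic range is limited

# toy example with a Gaussian PSF standing in for a measured one
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
psf /= psf.sum()
img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
pre = precompensate(img, psf)
```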

  7. Clinical efficacy of Ayurvedic management in computer vision syndrome: A pilot study.

    Science.gov (United States)

    Dhiman, Kartar Singh; Ahuja, Deepak Kumar; Sharma, Sanjeev Kumar

    2012-07-01

    Improper use of the sense organs, violating the moral code of conduct, and the effect of time are the three basic causative factors behind all health problems. The computer, the knowledge bank of modern life, has emerged as a profession causing vision-related discomfort, ocular fatigue, and systemic effects. Computer Vision Syndrome (CVS) is the new nomenclature for the visual, ocular, and systemic symptoms arising due to prolonged and improper work on the computer, and it is emerging as a pandemic in the 21st century. On critical analysis of the symptoms of CVS on the Tridoshika theory of Ayurveda, as per the road map given by Acharya Charaka, it seems to be a Vata-Pittaja ocular cum systemic disease which needs a systemic as well as a topical treatment approach. Shatavaryaadi Churna (orally), Go-Ghrita Netra Tarpana (topically), and counseling regarding proper working conditions on the computer were tried in 30 patients of CVS. In group I, where oral and local treatment was given, significant improvement in all the symptoms of CVS was observed, whereas groups II and III, which were given local treatment and counseling regarding proper working conditions, respectively, showed insignificant results. The study verified the hypothesis that CVS in the Ayurvedic perspective is a Vata-Pittaja disease affecting mainly the eyes and the body as a whole, and that it needs a systemic intervention rather than topical ocular medication only.

  8. Computer use and vision-related problems among university students in ajman, United arab emirate.

    Science.gov (United States)

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The Chi-square test was used to determine the significance of the observed differences between the variables, with the level of statistical significance set at P < 0.05. The visual problems most commonly reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with the occurrence of the symptoms and with increased interruptions of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer-related vision problems.

  9. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    Science.gov (United States)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
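
    The point-correspondence step can be illustrated with present-day OpenCV primitives. This is a generic sketch, not the knowledge-based decomposition scheme of the paper, and the image file names are hypothetical: feature points are detected in the left and right images of a stereo pair and matched, yielding the correspondences from which motion parameters are later computed.

```python
import cv2

def stereo_correspondences(left_path, right_path, max_matches=200):
    """Detect ORB feature points in a stereo pair and return matched point pairs."""
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)
    # cross-checked brute-force matching removes most ambiguous correspondences
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:max_matches]
    return [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches]

# pairs = stereo_correspondences("left_t0.png", "right_t0.png")   # hypothetical file names
```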

  10. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for remote monitoring and control of an unmanned system in atomic power station environments. Here, we define a command as the loci of a gesture. We aim at the development of an algorithm using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross correlation of the PDOE image. To recognize the gesture word, the 8-direction code is employed as the input symbol for a discrete HMM. Another gesture recognition approach, based on glove sensors, introduces a Pinch glove and a Polhemus sensor as input devices. The features extracted through preprocessing act as the input signal of the recognizer. For recognizing the 3D loci of the Polhemus sensor, a discrete HMM is also adopted. An alternative approach combines the two foregoing recognition systems and uses the vision and glove sensors together. The extracted mesh feature and the 8-direction code from the locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is introduced here and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.
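
    The 8-direction code used as the input symbol for the discrete HMM can be obtained by quantising the angle of each successive displacement of the tracked hand into one of eight 45-degree sectors. A minimal illustrative sketch (not the authors' code):

```python
import math

def direction_codes(locus):
    """Convert a 2-D hand locus [(x, y), ...] into 8-direction chain codes (0..7).

    Code 0 points right; codes increase counter-clockwise in 45-degree steps."""
    codes = []
    for (x0, y0), (x1, y1) in zip(locus, locus[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dx == 0 and dy == 0:
            continue                                     # ignore stationary frames
        angle = math.atan2(dy, dx) % (2 * math.pi)
        codes.append(int((angle + math.pi / 8) // (math.pi / 4)) % 8)
    return codes

# a roughly circular gesture sweeps through all eight symbols
circle = [(math.cos(t / 10.0), math.sin(t / 10.0)) for t in range(63)]
print(direction_codes(circle))
```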

  11. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    Science.gov (United States)

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

    The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz) which were generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 sec) high enough to be used for a binary decision. Based on the suggested paradigm, we implemented a first online ASSR-based BCI system that demonstrated the possibility of materializing a totally vision-free BCI system.
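
    A single-trial decision of this kind can be sketched by comparing EEG power at the two beat frequencies. The snippet below is illustrative only; channel selection, referencing and the study's classifier details are omitted, and it assumes a single-channel epoch sampled at fs:

```python
import numpy as np
from scipy.signal import welch

def assr_decision(eeg_epoch, fs, f_left=37.0, f_right=43.0, bw=1.0):
    """Return 'left' or 'right' depending on which beat frequency carries more power."""
    freqs, psd = welch(eeg_epoch, fs=fs, nperseg=int(2 * fs))   # ~0.5 Hz resolution

    def band_power(f0):
        band = (freqs >= f0 - bw) & (freqs <= f0 + bw)
        return psd[band].mean()

    return "left" if band_power(f_left) > band_power(f_right) else "right"

# synthetic 6 s epoch with a stronger 43 Hz response
fs = 512
t = np.arange(0, 6, 1 / fs)
epoch = 0.5 * np.sin(2 * np.pi * 37 * t) + 1.0 * np.sin(2 * np.pi * 43 * t) + np.random.randn(t.size)
print(assr_decision(epoch, fs))   # expected: 'right'
```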

  12. Computer vision-based limestone rock-type classification using probabilistic neural network

    Institute of Scientific and Technical Information of China (English)

    Ashok Kumar Patel; Snehamoy Chatterjee

    2016-01-01

    Proper quality planning of limestone raw materials is an essential job of maintaining the desired feed in a cement plant. Rock-type identification is an integrated part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory-scale vision-based model was developed using a probabilistic neural network (PNN) where color histogram features are used as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input for the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal or close-to-optimal classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily for classifying limestone rock-types. Overall, the error of mis-classification is below 6%. When compared with three other classification algorithms, it is observed that the proposed method performs substantially better than all three.
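
    The nine-dimensional input vector can be reproduced directly from an RGB image: for each of the three channels, the histogram-weighted mean, skewness and kurtosis are computed. A minimal NumPy sketch of the feature extraction (the PNN itself and the exact normalisation used in the paper are omitted):

```python
import numpy as np

def histogram_features(rgb_image):
    """Return 9 features: weighted mean, skewness and kurtosis of each RGB channel histogram."""
    feats = []
    for ch in range(3):
        values = rgb_image[..., ch].ravel().astype(np.float64)
        hist, edges = np.histogram(values, bins=256, range=(0, 256), density=True)
        centers = (edges[:-1] + edges[1:]) / 2.0
        mean = np.sum(hist * centers)
        var = max(np.sum(hist * (centers - mean) ** 2), 1e-12)   # guard against flat channels
        skew = np.sum(hist * (centers - mean) ** 3) / var ** 1.5
        kurt = np.sum(hist * (centers - mean) ** 4) / var ** 2
        feats.extend([mean, skew, kurt])
    return np.asarray(feats)

# example (requires an RGB image array, e.g. loaded with OpenCV and converted BGR -> RGB):
# features = histogram_features(rock_rgb)
```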

  13. Computer vision-based limestone rock-type classification using probabilistic neural network

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Patel

    2016-01-01

    Full Text Available Proper quality planning of limestone raw materials is an essential job of maintaining the desired feed in a cement plant. Rock-type identification is an integrated part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory-scale vision-based model was developed using a probabilistic neural network (PNN) where color histogram features are used as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input for the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal or close-to-optimal classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily for classifying limestone rock-types. Overall, the error of mis-classification is below 6%. When compared with three other classification algorithms, it is observed that the proposed method performs substantially better than all three.

  14. The Computer-Vision Symptom Scale (CVSS17): development and initial validation.

    Science.gov (United States)

    González-Pérez, Mariano; Susi, Rosario; Antona, Beatriz; Barrio, Ana; González, Enrique

    2014-06-17

    To develop a questionnaire (in Spanish) to measure computer-related visual and ocular symptoms (CRVOS). A pilot questionnaire was created by consulting the literature, clinicians, and video display terminal (VDT) workers. The replies of 636 subjects completing the questionnaire were assessed using the Rasch model and conventional statistics to generate a new scale, designated the Computer-Vision Symptom Scale (CVSS17). Validity and reliability were determined by Rasch fit statistics, principal components analysis (PCA), person separation, differential item functioning (DIF), and item-person targeting. To assess construct validity, the CVSS17 was correlated with a Rasch-based visual discomfort scale (VDS) in 163 VDT workers; this group completed the CVSS17 twice in order to assess test-retest reliability (two-way single-measure intraclass correlation coefficient [ICC] with its 95% confidence interval, and the coefficient of repeatability [COR]). The CVSS17 contains 17 items exploring 15 different symptoms. These items showed good reliability and internal consistency (mean square infit and outfit 0.88-1.17, eigenvalue for the first residual PCA component 1.37, person separation 2.85, and no DIF). Pearson's correlation with VDS scores was 0.60 (P < 0.001). The CVSS17 thus appears to be a useful instrument for quantifying CRVOS in computer workers. (A Spanish abstract is available.) Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  15. Towards Domain Ontology Creation Based on a Taxonomy Structure in Computer Vision

    Directory of Open Access Journals (Sweden)

    Sadgal mohamed

    2016-02-01

    Full Text Available In computer vision, to create a knowledge base usable by information systems, we need a data structure that facilitates access to the information. The artificial intelligence community uses ontologies to structure and represent domain knowledge. This information structure can be used as a database for many geographic information systems (GIS) or for information systems treating real objects, for example road scenes, and it can also be utilized by other systems. To this end, we provide a process to create a taxonomy structure based on a new hierarchical image clustering method. The hierarchical relation is based on visual object features and contributes to building the domain ontology.

  16. Tensor Voting A Perceptual Organization Approach to Computer Vision and Machine Learning

    CERN Document Server

    Mordohai, Philippos

    2006-01-01

    This lecture presents research on a general framework for perceptual organization that was conducted mainly at the Institute for Robotics and Intelligent Systems of the University of Southern California. It is not written as a historical recount of the work, since the sequence of the presentation is not in chronological order. It aims at presenting an approach to a wide range of problems in computer vision and machine learning that is data-driven, local and requires a minimal number of assumptions. The tensor voting framework combines these properties and provides a unified perceptual organization approach.

  17. An Application of Computer Vision Systems to Solve the Problem of Unmanned Aerial Vehicle Control

    Directory of Open Access Journals (Sweden)

    Aksenov Alexey Y.

    2014-09-01

    Full Text Available The paper considers an approach for the application of computer vision systems to solve the problem of unmanned aerial vehicle control. The processing of images obtained through the onboard camera is required for absolute positioning of the aerial platform (automatic landing and take-off, hovering, etc.). The proposed method combines the advantages of existing systems and provides the ability to perform hovering over a given point and exact take-off and landing. The limitations of the implemented methods are determined, and an algorithm is proposed to combine them in order to improve efficiency.

  18. The computer vision in the service of safety and reliability in steam generators inspection services; La vision computacional al servicio de la seguridad y fiabilidad en los servicios de inspeccion en generadores de vapor

    Energy Technology Data Exchange (ETDEWEB)

    Pineiro Fernandez, P.; Garcia Bueno, A.; Cabrera Jordan, E.

    2012-07-01

    Computer vision has matured very quickly in the last ten years, facilitating new developments in various areas of nuclear application and allowing processes and tasks to be automated and simplified efficiently, either in place of or in collaboration with people and equipment. Current computer vision (a more appropriate term than the artificial vision concept) also offers great possibilities for improving the reliability and safety of NPP inspection systems.

  19. Visualization techniques for computer network defense

    Science.gov (United States)

    Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew

    2011-06-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  20. Visualization Techniques for Computer Network Defense

    Energy Technology Data Exchange (ETDEWEB)

    Beaver, Justin M [ORNL; Steed, Chad A [ORNL; Patton, Robert M [ORNL; Cui, Xiaohui [ORNL; Schultz, Matthew A [ORNL

    2011-01-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  1. High-Resolution, Semi-Automatic Fault Mapping Using Unmanned Aerial Vehicles and Computer Vision: Mapping from an Armchair

    Science.gov (United States)

    Micklethwaite, S.; Vasuki, Y.; Turner, D.; Kovesi, P.; Holden, E.; Lucieer, A.

    2012-12-01

    Our ability to characterise fractures depends upon the accuracy and precision of field techniques, as well as the quantity of data that can be collected. Unmanned Aerial Vehicles (UAVs; otherwise known as "drones") and photogrammetry provide exciting new opportunities for the accurate mapping of fracture networks over large surface areas. We use a highly stable, 8-rotor UAV platform (Oktokopter) with a digital SLR camera and the Structure-from-Motion computer vision technique to generate point clouds, wireframes, digital elevation models and orthorectified photo mosaics. Furthermore, new image analysis methods such as phase congruency are applied to the data to semiautomatically map fault networks. A case study is provided of intersecting fault networks and associated damage, from Piccaninny Point in Tasmania, Australia. Outcrops >1 km in length can be surveyed in a single 5-10 minute flight, with pixel resolution ~1 cm. Centimetre-scale precision can be achieved when selected ground control points are measured using a total station. These techniques have the potential to provide rapid, ultra-high resolution mapping of fracture networks from many different lithologies, enabling us to more accurately assess the "fit" of observed data relative to model predictions over a wide range of boundary conditions. [Figure: High-resolution DEM of a faulted outcrop (Piccaninny Point, Tasmania) generated using the Oktokopter UAV (inset) and photogrammetric techniques.]

  2. Design of vision concepts to explore the future: Nature, context and design techniques

    NARCIS (Netherlands)

    Mejia Sarmiento, J.R.; Simonse, W.L.

    2015-01-01

    Industrial firms are facing a constant dilemma, to be ready for the future, have a vision, and at the same time act within the current situation, exploit current products efficiently. This research examines visions that embody future opportunities and ideas, “vision concepts” such as concept cars and …

  3. Design of vision concepts to explore the future: Nature, context and design techniques

    NARCIS (Netherlands)

    Mejia Sarmiento, J.R.; Simonse, W.L.

    2015-01-01

    Industrial firms are facing a constant dilemma, to be ready for the future, have a vision, and at the same time act within the current situation, exploit current products efficiently. This research examines visions that embody future opportunities and ideas, “vision concepts” such as concept cars

  4. Computer Vision Utilization for Detection of Green House Tomato under Natural Illumination

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2013-02-01

    Full Text Available The agricultural sector has seen the application of automated systems for two decades. These systems are applied to harvest fruits in agriculture. Computer vision is one of the technologies most widely used in the food industry and agriculture. In this paper, an automated system based on computer vision for harvesting greenhouse tomatoes is presented. A CCD camera takes images of the workspace, and tomatoes with over 50 percent ripeness are detected through an image processing algorithm. In this research, three color spaces (RGB, HSI and YCbCr) and three algorithms (threshold recognition, curvature of the image, and red/green ratio) were used to identify ripe tomatoes against the background under natural illumination. The average errors of the threshold recognition, red/green ratio and curvature-of-the-image algorithms were 11.82%, 10.03% and 7.95% in the HSI, RGB and YCbCr color spaces, respectively. Therefore, the YCbCr color space and the curvature-of-the-image algorithm were identified as the most suitable for recognizing fruits under natural illumination conditions.
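
    A red/green-ratio test of the kind compared in the study can be sketched in a few lines of OpenCV; the thresholds below are hypothetical placeholders, not the paper's parameters:

```python
import cv2
import numpy as np

def ripe_tomato_mask(bgr_image, ratio_threshold=1.4, min_red=60):
    """Flag pixels whose red/green ratio indicates ripe (red) tomato tissue."""
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    ratio = r / (g + 1e-6)                               # avoid division by zero
    mask = ((ratio > ratio_threshold) & (r > min_red)).astype(np.uint8) * 255
    # clean up speckle with a morphological opening
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# frame = cv2.imread("greenhouse_frame.png")              # hypothetical file name
# mask = ripe_tomato_mask(frame)
# ripeness_fraction = mask.mean() / 255.0
```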

  5. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under high vacuum conditions during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. The experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the laser diagnostic system calibration.
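
    Measuring the displacement of a laser spot with a monocular camera typically reduces to locating the bright spot in each frame and converting the pixel shift to millimetres using a scale derived from the wall markers. The OpenCV sketch below is a generic illustration, not the EAST-AIA software; the pixel scale and file names are assumptions:

```python
import cv2
import numpy as np

def laser_spot_center(gray):
    """Locate the laser spot as the centroid of the brightest blob (Otsu threshold)."""
    _, bright = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    m = cv2.moments(bright, binaryImage=True)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def displacement_mm(gray_before, gray_after, mm_per_pixel):
    """Spot displacement between two frames, converted to millimetres with a pixel
    scale derived from reference markers of known spacing on the inner wall."""
    shift_px = laser_spot_center(gray_after) - laser_spot_center(gray_before)
    return np.linalg.norm(shift_px) * mm_per_pixel

# d = displacement_mm(cv2.imread("before.png", 0), cv2.imread("after.png", 0), mm_per_pixel=0.5)
```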

  6. Selective cultivation and rapid detection of Staphylococcus aureus by computer vision.

    Science.gov (United States)

    Wang, Yong; Yin, Yongguang; Zhang, Chaonan

    2014-03-01

    In this paper, we developed a selective growth medium and a more rapid detection method based on computer vision for selective isolation and identification of Staphylococcus aureus from foods. The selective medium consisted of tryptic soy broth basal medium, 3 inhibitors (NaCl, K2TeO3, and phenethyl alcohol), and 2 accelerators (sodium pyruvate and glycine). After 4 h of selective cultivation, bacterial detection was accomplished using computer vision. The total analysis time was 5 h. Compared to the Baird-Parker plate count method, which requires 4 to 5 d, this new detection method offers great time savings. Moreover, our novel method had a correlation coefficient of greater than 0.998 when compared with the Baird-Parker plate count method. The detection range for S. aureus was 10 to 10^7 CFU/mL. Our new, rapid detection method for microorganisms in foods has great potential for routine food safety control and microbiological detection applications. © 2014 Institute of Food Technologists®

  7. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

    Full Text Available The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature of activity suggesting potential antipsychotic activity. This finding is consistent with TP-10’s activity in multiple rodent models that is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to that of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  8. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation; they are applied to test the correctness of tie-point detection and the time of computations, and to assess difficulties in their implementation. The BRISK, FAST, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  9. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

    appear along the planning process. In this context, the technical literature about optimization techniques applied to microgrid planning has been reviewed, and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new...

  10. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  11. Advancing crime scene computer forensics techniques

    Science.gov (United States)

    Hosmer, Chet; Feldman, John; Giordano, Joe

    1999-02-01

    Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing `traditional crimes' such as embezzlements, thefts, extortion and murder. This paper will focus on reviewing the current state-of-the-art of the data recovery and evidence construction tools used in both the field and laboratory for prosecution purposes.

  12. New Information Dispersal Techniques for Trustworthy Computing

    Science.gov (United States)

    Parakh, Abhishek

    2011-01-01

    Information dispersal algorithms (IDA) are used for distributed data storage because they simultaneously provide security, reliability and space efficiency, constituting a trustworthy computing framework for many critical applications, such as cloud computing, in the information society. In the most general sense, this is achieved by dividing data…

  13. Computer graphics techniques and computer-generated movies

    Science.gov (United States)

    Holzman, Robert E.; Blinn, James F.

    1988-04-01

    The JPL Computer Graphics Laboratory (CGL) has been using advanced computer graphics for more than ten years to simulate space missions and related activities. Applications have ranged from basic computer graphics used interactively to allow engineers to study problems, to sophisticated color graphics used to simulate missions and produce realistic animations and stills for use by NASA and the scientific press. In addition, the CGL did the computer animation for ``Cosmos'', a series of general science programs done for Public Television in the United States by Carl Sagan and shown world-wide. The CGL recently completed the computer animation for ``The Mechanical Universe'', a series of fifty-two half-hour elementary physics lectures, led by Professor David Goodstein of the California Institute of Technology, and now being shown on Public Television in the US. For this series, the CGL produced more than seven hours of computer animation, averaging approximately eight minutes and thirty seconds of computer animation per half-hour program. Our aim at the JPL Computer Graphics Laboratory (CGL) is the realistic depiction of physical phenomena, that is, we deal primarily in ``science education'' rather than in scientific research. Of course, our attempts to render physical events realistically often require the development of new capabilities through research or technology advances, but those advances are not our primary goal.

  14. Addendum to Research MMMCV; A Man/Microbio/Megabio/Computer Vision

    CERN Document Server

    Alipour, Philip B

    2007-01-01

    In October 2007, a Research Proposal for the University of Sydney, Australia, the author suggested that biovie-physical phenomenon as `electrodynamic dependant biological vision', is governed by relativistic quantum laws and biovision. The phenomenon on the basis of `biovielectroluminescence', satisfies man/microbio/megabio/computer vision (MMMCV), as a robust candidate for physical and visual sciences. The general aim of this addendum is to present a refined text of Sections 1-3 of that proposal and highlighting the contents of its Appendix in form of a `Mechanisms' Section. We then briefly remind in an article aimed for December 2007, by appending two more equations into Section 3, a theoretical II-time scenario as a time model well-proposed for the phenomenon. The time model within the core of the proposal, plays a significant role in emphasizing the principle points on Objectives no. 1-8, Sub-hypothesis 3.1.2, mentioned in Article [arXiv:0710.0410]. It also expresses the time concept in terms of causing q...

  15. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    Science.gov (United States)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Current available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  16. Computer vision: automating DEM generation of active lava flows and domes from photos

    Science.gov (United States)

    James, M. R.; Varley, N. R.; Tuffen, H.

    2012-12-01

    Accurate digital elevation models (DEMs) form fundamental data for assessing many volcanic processes. We present a photo-based approach developed within the computer vision community to produce DEMs from a consumer-grade digital camera and freely available software. Two case studies, based on the Volcán de Colima lava dome and the Puyehue Cordón-Caulle obsidian flow, highlight the advantages of the technique in terms of the minimal expertise required, the speed of data acquisition and the automated processing involved. The reconstruction procedure combines structure-from-motion and multi-view stereo algorithms (SfM-MVS) and can generate dense 3D point clouds (millions of points) from multiple photographs of a scene taken from different positions. Processing is carried out by automated software (e.g. http://blog.neonascent.net/archives/bundler-photogrammetry-package/). SfM-MVS reconstructions are initially un-scaled and un-oriented so additional geo-referencing software has been developed. Although this step requires the presence of some control points, the SfM-MVS approach has significantly easier image acquisition and control requirements than traditional photogrammetry, facilitating its use in a broad range of difficult environments. At Colima, the lava dome surface was reconstructed from recent and archive images taken from light aircraft overflights (2007-2011). Scaling and geo-referencing were carried out using features identified in web-sourced ortho-imagery obtained as a basemap layer in ArcMap - no ground-based measurements were required. Average surface measurement densities are typically 10-40 points per m2. Over mean viewing distances of ~500-2500 m (for different surveys), RMS error on the control features is ~1.5 m. The derived DEMs (with 1-m grid resolution) are sufficient to quantify volumetric change, as well as to highlight the structural evolution of the upper surface of the dome following an explosion in June 2011. At Puyehue Cordón-Caulle …
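
    Geo-referencing an un-scaled, un-oriented SfM-MVS point cloud amounts to estimating a similarity transform (scale, rotation, translation) from a few control points. Below is a minimal NumPy sketch of the standard Umeyama/Procrustes solution, given as an illustration rather than the authors' geo-referencing software:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t such that dst ≈ s * R @ src + t.

    src, dst : (N, 3) arrays of matching control points (model vs. world coordinates)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # guard against an improper (reflected) rotation
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def apply_transform(points, s, R, t):
    """Map an (N, 3) point cloud into the geo-referenced frame."""
    return s * points @ R.T + t
```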

  17. The use of in vivo, ex vivo, in vitro, computational models and volunteer studies in vision research and therapy, and their contribution to the Three Rs.

    Science.gov (United States)

    Combes, Robert D; Shah, Atul B

    2016-07-01

    Much is known about mammalian vision, and considerable progress has been achieved in treating many vision disorders, especially those due to changes in the eye, by using various therapeutic methods, including stem cell and gene therapy. While cells and tissues from the main parts of the eye and the visual cortex (VC) can be maintained in culture, and many computer models exist, the current non-animal approaches are severely limiting in the study of visual perception and retinotopic imaging. Some of the early studies with cats and non-human primates (NHPs) are controversial for animal welfare reasons and are of questionable clinical relevance, particularly with respect to the treatment of amblyopia. More recently, the UK Home Office records have shown that attention is now more focused on rodents, especially the mouse. This is likely to be due to the perceived need for genetically-altered animals, rather than to knowledge of the similarities and differences of vision in cats, NHPs and rodents, and the fact that the same techniques can be used for all of the species. We discuss the advantages and limitations of animal and non-animal methods for vision research, and assess their relative contributions to basic knowledge and clinical practice, as well as outlining the opportunities they offer for implementing the principles of the Three Rs (Replacement, Reduction and Refinement). 2016 FRAME.

  18. Cloud Computing Techniques for Space Mission Design

    Science.gov (United States)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  19. Cloud Computing Techniques for Space Mission Design

    Science.gov (United States)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  20. Bringing Advanced Computational Techniques to Energy Research

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  1. Optimization techniques for computationally expensive rendering algorithms

    OpenAIRE

    Navarro Gil, Fernando; Gutiérrez Pérez, Diego; Serón Arbeloa, Francisco José

    2012-01-01

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance. However, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis...

  2. Compression Rate Method for Empirical Science and Application to Computer Vision

    CERN Document Server

    Burfoot, Daniel

    2010-01-01

    This philosophical paper proposes a modified version of the scientific method, in which large databases are used instead of experimental observations as the necessary empirical ingredient. This change in the source of the empirical data allows the scientific method to be applied to several aspects of physical reality that previously resisted systematic interrogation. Under the new method, scientific theories are compared by instantiating them as compression programs, and examining the codelengths they achieve on a database of measurements related to a phenomenon of interest. Because of the impossibility of compressing random data, "real world" data can only be compressed by discovering and exploiting the empirical structure it exhibits. The method also provides a new way of thinking about two longstanding issues in the philosophy of science: the problem of induction and the problem of demarcation. The second part of the paper proposes to reformulate computer vision as an empirical science of visual reality...

  3. DESIGN OF A NEW TYPE OF AGV BASED ON COMPUTER VISION

    Institute of Scientific and Technical Information of China (English)

    Ji Shouwen; Li Keqiang; Miao Lixin; Wang Rongben; Guo Keyou

    2004-01-01

    The structure, function and working principle of JLUIV-3, a new type of automated guided vehicle (AGV) with computer vision, are described. A white stripe of a certain width is used as the guide mark for JLUIV-3 automated navigation. JLUIV-3 can automatically recognize the Arabic-numeral codes which mark the multi-branch paths and multi-operation buffers, and autonomously select the correct path to the destination. Compared with a traditional AGV, it has much more navigation flexibility at less cost, and provides higher-level intelligence. The method of identifying the navigation path using a neural network and the optimal control method of the AGV are introduced in detail.
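
    Guide-stripe following of this kind is often implemented by thresholding the bright stripe and estimating its dominant line, for example with a Hough transform. The OpenCV sketch below is a generic illustration (JLUIV-3 uses a neural network for path identification, which is not reproduced here; the threshold values are assumptions):

```python
import cv2
import numpy as np

def stripe_offset_and_angle(bgr_frame):
    """Estimate the lateral offset (pixels) of a bright guide stripe from the image
    centre, and the angle (degrees) of the detected line's normal."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # white stripe is bright
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=80)
    if lines is None:
        return None                                               # stripe not visible
    rho, theta = lines[0][0]                                      # strongest line: rho = x*cos(theta) + y*sin(theta)
    h, w = gray.shape
    # x coordinate of the line at the vertical centre of the image
    x_at_centre = (rho - (h / 2) * np.sin(theta)) / max(np.cos(theta), 1e-6)
    return x_at_centre - w / 2, np.degrees(theta)
```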

  4. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    Full Text Available The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first scheme concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. Results suggest that there exist several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
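
    The second parameterization can be made concrete as follows: a 3-vector p is mapped onto the unit quaternion sphere by inverse stereographic projection (projection pole at q = (-1, 0, 0, 0)) and the resulting quaternion is converted to a rotation matrix. A minimal NumPy sketch of the idea, not the paper's code:

```python
import numpy as np

def params_to_rotation(p):
    """Map a 3-vector p to a rotation matrix via inverse stereographic projection of p
    onto the unit quaternion sphere."""
    p = np.asarray(p, dtype=float)
    n2 = float(np.dot(p, p))
    w = (1.0 - n2) / (1.0 + n2)            # scalar part of the unit quaternion
    x, y, z = 2.0 * p / (1.0 + n2)         # vector part; (w, x, y, z) has unit norm
    # standard unit-quaternion to rotation-matrix conversion
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

R = params_to_rotation([0.1, -0.2, 0.3])
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))  # True True
```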

  5. The Event Detection and the Apparent Velocity Estimation Based on Computer Vision

    Science.gov (United States)

    Shimojo, M.

    2012-08-01

    The high spatial and time resolution data obtained by the telescopes aboard Hinode revealed new, interesting dynamics in the solar atmosphere. In order to detect such events and estimate the velocity of the dynamics automatically, we examined optical flow estimation methods based on OpenCV, the computer vision library. We applied the methods to the prominence eruption observed by NoRH and the polar X-ray jet observed by XRT. As a result, it is clear that the methods work well for solar images if the images are optimized for the methods. This indicates that the optical flow estimation methods in the OpenCV library are very useful for analyzing solar phenomena.
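
    OpenCV's dense optical flow can be used to estimate apparent velocities between two successive solar images. The sketch below uses the Farneback method as a generic example (the abstract does not specify which OpenCV estimator was finally adopted); the plate scale and cadence values are hypothetical:

```python
import cv2
import numpy as np

def apparent_velocity(frame0, frame1, arcsec_per_pixel, seconds_per_frame):
    """Dense Farneback optical flow between two 8-bit grayscale solar images,
    returned as apparent speed in arcsec/s for each pixel."""
    flow = cv2.calcOpticalFlowFarneback(
        frame0, frame1, None,
        pyr_scale=0.5, levels=3, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
    speed_px = np.hypot(flow[..., 0], flow[..., 1])    # pixels per frame
    return speed_px * arcsec_per_pixel / seconds_per_frame

# v = apparent_velocity(img_t0, img_t1, arcsec_per_pixel=2.0, seconds_per_frame=60.0)
```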

  6. Universal computer vision system for monitoring the main parameters of wind turbines

    Directory of Open Access Journals (Sweden)

    Korzhavin Sergey

    2016-01-01

    Full Text Available The article presents a universal autonomous computer vision system for monitoring the operation of wind turbines. The proposed system makes it possible to estimate the rotational speed and the relative position deviation of a wind turbine. We present a universal method for determining the rotation of wind turbines of various shapes and structures. All obtained data are saved in a database. The presented method was tested at the Territory of Non-traditional Renewable Energy Sources of Ural Federal University; the experimental wind turbine is produced by the “Scientific and Production Association of automatics named after academician N.A. Semikhatov”. Results show the efficiency of the proposed system and its ability to determine the main parameters, such as the rotational speed, with good accuracy and quickness of orientation. The proposed solution assumes that, in most cases, the rotating and central parts of the wind turbine can be assigned different colors; a color change of the wind blades should not affect the system performance.

  7. Lipid vesicle shape analysis from populations using light video microscopy and computer vision.

    Directory of Open Access Journals (Sweden)

    Jernej Zupanc

    Full Text Available We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1-50 µm in diameter. For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness. This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected.
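
    The two shape descriptors extracted per vesicle, the projected (equivalent-circle) diameter and the isoperimetric quotient 4πA/P² (equal to 1 for a perfect circle), are straightforward to compute from segmented contours. The OpenCV sketch below is a generic illustration that assumes a binary mask of detected vesicles, not the authors' full pipeline:

```python
import cv2
import numpy as np

def vesicle_shape_features(binary_mask, um_per_pixel=1.0):
    """For each vesicle contour in a binary mask, return
    (projected equivalent-circle diameter in micrometres, isoperimetric quotient)."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, closed=True)
        if perimeter == 0 or area < 10:                    # skip degenerate detections
            continue
        diameter = 2.0 * np.sqrt(area / np.pi) * um_per_pixel
        iq = 4.0 * np.pi * area / perimeter ** 2           # 1.0 for a perfect circle
        features.append((diameter, iq))
    return features
```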

  8. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    Full Text Available This work aims at detecting surface defects on reflective industrial parts. A machine vision system, performing the detection of geometric-aspect surface defects, is completely described. The revealing of defects is realized by a particular lighting device, which has been carefully designed to ensure the imaging of defects. The lighting system greatly simplifies the image processing for defect segmentation, so real-time inspection of reflective products is possible. To help in the design of the imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images. Simulation provides here a very efficient way to perform tests compared to numerous manual experiments.

  9. Using Adaptive Tools and Techniques To Teach a Class of Students Who Are Blind or Low-Vision

    Science.gov (United States)

    Supalo, Cary A.; Mallouk, Thomas E.; Lanouette, James; Amorosi, Christeallia; Wohlers, H. David; McEnnis, Kathleen

    2009-05-01

    A brief overview of the 2007 National Federation of the Blind-Jernigan Institute Youth Slam Chemistry Track, a course of study within a science camp that provided firsthand experimental experience to 200 students who are blind and low-vision, is given. For many of these students, this was their first hands-on experience with laboratory chemistry. Several new blind and low vision-accessible laboratory technologies were successfully debuted. These tools and techniques bring a greater degree of freedom and independence to students with visual impairments in their science classes. Modifications of standard chemistry experiments that incorporated these new tools are described.

  10. Computer vision syndrome among computer office workers in a developing country: an evaluation of prevalence and risk factors.

    Science.gov (United States)

    Ranasinghe, P; Wathurapatha, W S; Perera, Y S; Lamabadusuriya, D A; Kulatunga, S; Jayawardana, N; Katulanda, P

    2016-03-09

    Computer vision syndrome (CVS) is a group of visual symptoms experienced in relation to the use of computers. Nearly 60 million people suffer from CVS globally, resulting in reduced productivity at work and reduced quality of life of the computer worker. The present study aims to describe the prevalence of CVS and its associated factors among a nationally representative sample of Sri Lankan computer workers. Two thousand five hundred computer office workers were invited for the study from all nine provinces of Sri Lanka between May and December 2009. A self-administered questionnaire was used to collect socio-demographic data, symptoms of CVS and its associated factors. A binary logistic regression analysis was performed in all patients with 'presence of CVS' as the dichotomous dependent variable and age, gender, duration of occupation, daily computer usage, pre-existing eye disease, not using a visual display terminal (VDT) filter, adjusting brightness of screen, use of contact lenses, angle of gaze and ergonomic practices knowledge as the continuous/dichotomous independent variables. A similar binary logistic regression analysis was performed in all patients with 'severity of CVS' as the dichotomous dependent variable and the other continuous/dichotomous independent variables. The sample size was 2210 (response rate 88.4%). Mean age was 30.8 ± 8.1 years and 50.8% of the sample were males. The 1-year prevalence of CVS in the study population was 67.4%. Female gender (OR: 1.28), duration of occupation (OR: 1.07), daily computer usage (OR: 1.10), pre-existing eye disease (OR: 4.49), not using a VDT filter (OR: 1.02), use of contact lenses (OR: 3.21) and ergonomics practices knowledge (OR: 1.24) were all significantly associated with the presence of CVS. The duration of occupation (OR: 1.04) and presence of pre-existing eye disease (OR: 1.54) were significantly associated with the presence of 'severe CVS'. Sri Lankan computer workers had a high prevalence of CVS. Female gender …
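
    The reported odds ratios come from a binary logistic regression with the presence of CVS as the dichotomous outcome. The statsmodels sketch below illustrates such an analysis on synthetic data; the column names and values are hypothetical, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical survey data: 1 = outcome/exposure present, 0 = absent
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cvs": rng.integers(0, 2, 500),                 # outcome: computer vision syndrome
    "female": rng.integers(0, 2, 500),
    "daily_hours": rng.uniform(1, 10, 500),
    "preexisting_eye_disease": rng.integers(0, 2, 500),
    "uses_contact_lenses": rng.integers(0, 2, 500),
})

X = sm.add_constant(df[["female", "daily_hours", "preexisting_eye_disease", "uses_contact_lenses"]])
model = sm.Logit(df["cvs"], X).fit(disp=False)
odds_ratios = np.exp(model.params)                  # exponentiated coefficients = odds ratios
print(pd.concat([odds_ratios, np.exp(model.conf_int())], axis=1))
```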

  11. Application of Computer Vision for quality control in frozen mixed berries production: colour calibration issues

    Directory of Open Access Journals (Sweden)

    D. Ricauda Aimonino

    2013-09-01

    Full Text Available Computer vision is becoming increasingly important in the quality control of many food processes. The appearance properties of food products (colour, texture, shape and size) are, in fact, correlated with organoleptic characteristics and/or the presence of defects. Quality control based on image processing eliminates the subjectivity of human visual inspection, allowing rapid and non-destructive analysis. However, most food matrices show a wide variability in appearance features, so robust and customized image elaboration algorithms have to be implemented for each specific product. For this reason, quality control by visual inspection is still rather widespread in several food processes. The case study inspiring this paper concerns the production of frozen mixed berries. Once frozen, different kinds of berries are mixed together, in different amounts, according to a recipe. The correct quantity of each kind of fruit, within a certain tolerance, has to be ensured by producers. Quality control relies on taking a few samples from each production lot (samples of the same weight) and manually counting the amount of each species. This operation is tedious, error-prone and time consuming, while a computer vision system (CVS) could determine the amount of each kind of berry in a few seconds. This paper discusses the problem of colour calibration of the CVS used for evaluating frozen berry mixtures. Images are acquired by a digital camera coupled with a dome lighting system, which gives homogeneous illumination over the entire visible surface of the berries, and by a flat-bed scanner. Device-dependent RGB data are then mapped onto the CIELab colorimetric colour space using different transformation operators. The obtained results show that the proposed calibration procedure leads to colour discrepancies comparable to, or even below, the sensitivity of the human eye.
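
    One common way to build such a device-to-colorimetric mapping is to image a colour chart with known CIELab values and fit a transformation by least squares (here a simple affine map; the paper compares several operators). A minimal NumPy sketch with hypothetical array names:

```python
import numpy as np

def fit_rgb_to_lab(rgb_patches, lab_reference):
    """Least-squares affine map from device RGB to CIELab using colour-chart patches.

    rgb_patches   : (N, 3) mean RGB of each chart patch as captured by the camera
    lab_reference : (N, 3) known CIELab values of the same patches"""
    A = np.hstack([rgb_patches, np.ones((len(rgb_patches), 1))])   # affine term
    M, *_ = np.linalg.lstsq(A, lab_reference, rcond=None)
    return M                                                        # shape (4, 3)

def rgb_to_lab(rgb, M):
    """Apply the fitted map to (N, 3) device RGB values."""
    rgb = np.atleast_2d(rgb)
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M

# M = fit_rgb_to_lab(measured_chart_rgb, chart_lab_values)   # hypothetical arrays
# lab = rgb_to_lab(berry_pixels, M)
```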

  12. Computer vision and driver distraction: developing a behaviour-flagging protocol for naturalistic driving data.

    Science.gov (United States)

    Kuo, Jonny; Koppel, Sjaan; Charlton, Judith L; Rudin-Brown, Christina M

    2014-11-01

    Naturalistic driving studies (NDS) allow researchers to discreetly observe everyday, real-world driving to better understand the risk factors that contribute to hazardous situations. In particular, NDS designs provide high ecological validity in the study of driver distraction. With increasing dataset sizes, current best practice of manually reviewing videos to classify the occurrence of driving behaviours, including those that are indicative of distraction, is becoming increasingly impractical. Current statistical solutions underutilise available data and create further epistemic problems. Similarly, technical solutions such as eye-tracking often require dedicated hardware that is not readily accessible or feasible to use. A computer vision solution based on open-source software was developed and tested to improve the accuracy and speed of processing NDS video data for the purpose of quantifying the occurrence of driver distraction. Using classifier cascades, manually-reviewed video data from a previously published NDS was reanalysed and used as a benchmark of current best practice for performance comparison. Two software coding systems were developed - one based on hierarchical clustering (HC), and one based on gender differences (MF). Compared to manual video coding, HC achieved 86 percent concordance, 55 percent reduction in processing time, and classified an additional 69 percent of target behaviour not previously identified through manual review. MF achieved 67 percent concordance, a 75 percent reduction in processing time, and classified an additional 35 percent of target behaviour not identified through manual review. The findings highlight the improvements in processing speed and correctly classifying target behaviours achievable through the use of custom developed computer vision solutions. Suggestions for improved system performance and wider implementation are discussed.
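
    The study's behaviour classifiers are not described here in enough detail to reproduce, but the general workflow of running a classifier cascade over naturalistic-driving video can be sketched as follows. The example uses OpenCV's bundled frontal-face Haar cascade purely as a stand-in for the custom-trained HC/MF classifiers, and the video filename is hypothetical.

```python
import cv2

# OpenCV's stock frontal-face cascade stands in for the study's custom-trained
# behaviour classifiers; any cascade XML trained for a target behaviour could be
# substituted here.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def flag_frames(video_path, stride=5):
    """Return the indices of frames in which the cascade fires at least once."""
    cap = cv2.VideoCapture(video_path)
    flagged, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # subsample frames to cut processing time
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(hits) > 0:
                flagged.append(idx)
        idx += 1
    cap.release()
    return flagged

# flagged = flag_frames("naturalistic_driving_clip.mp4")  # hypothetical file
```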

  13. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    Full Text Available The literature lacks approaches for describing justified technical requirements for the vision systems (VS) of industrial robotic complexes (IRC). The objective of this work is therefore to develop a technique for substantiating requirements for the main quality indicators of a VS functioning as part of an IRC. The proposed technique uses a model representation of the VS, which, as part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To solve the stated problem, it is proposed to define the target function of a designed IRC as the dependence of the IRC efficiency indicator on the VS quality indicators. The probability of producing no defective products during manufacturing is proposed as the indicator of IRC efficiency. Based on the functions the VS performs within the IRC information system, the adopted VS quality indicators are: the probability of correct recognition of objects in the IRC working area, and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified permissible errors. The specific values of these errors depend on the orientation errors of the working bodies of the manipulators that form part of the IRC. The paper presents mathematical expressions for the functional dependence of the probability of producing no defective products on the VS quality indicators and on the probability of failures of the IRC technological equipment. The offered technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in IRC VS development and, in particular, in the development of VS algorithms and software.

  14. Binocular robot vision emulating disparity computation in the primary visual cortex.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Kushima, Takayuki; Yagi, Tetsuya

    2008-01-01

    We designed a VLSI binocular vision system that emulates the disparity computation in the primary visual cortex (V1). The system consists of two silicon retinas, orientation chips, and field programmable gate array (FPGA), mimicking a hierarchical architecture of visual information processing in the disparity energy model. The silicon retinas emulate a Laplacian-Gaussian-like receptive field of the vertebrate retina. The orientation chips generate an orientation-selective receptive field by aggregating multiple pixels of the silicon retina, mimicking the Hubel-Wiesel-type feed-forward model in order to emulate a Gabor-like receptive field of simple cells. The FPGA receives outputs from the orientation chips corresponding to the left and right eyes and calculates the responses of the complex cells based on the disparity energy model. The system can provide the responses of complex cells tuned to five different disparities and a disparity map obtained by comparing these energy outputs. Owing to the combination of spatial filtering by analog parallel circuits and pixel-wise computation by hard-wired digital circuits, the present system can execute the disparity computation in real time using compact hardware.
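
    The disparity energy model referred to above can be illustrated with a small numerical sketch: quadrature-pair (Gabor) simple-cell responses are computed for the left and right image rows, and complex-cell energy is evaluated for a handful of candidate disparities. This is a generic, one-dimensional position-shift version with illustrative filter parameters, not the VLSI implementation described in the paper.

```python
import numpy as np
from scipy.signal import convolve

def gabor_pair(size=21, sigma=3.0, freq=0.15):
    """Quadrature pair (even/odd) of 1-D Gabor filters, as in simple-cell models."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def disparity_energy(left_row, right_row, disparities):
    """Complex-cell (binocular energy) responses for each candidate disparity."""
    even, odd = gabor_pair()
    le = convolve(left_row, even, mode="same")
    lo = convolve(left_row, odd, mode="same")
    energies = []
    for d in disparities:
        shifted = np.roll(right_row, d)           # position-shift model
        re = convolve(shifted, even, mode="same")
        ro = convolve(shifted, odd, mode="same")
        energies.append((le + re) ** 2 + (lo + ro) ** 2)
    return np.array(energies)                     # shape: (n_disparities, row_length)

# Synthetic stereo pair with a uniform disparity of 2 pixels; winner-take-all over
# the candidate disparities gives a coarse disparity estimate per pixel.
left = np.random.rand(256)
right = np.roll(left, -2)
candidates = np.array([-2, -1, 0, 1, 2])
E = disparity_energy(left, right, candidates)
disparity_map = candidates[np.argmax(E, axis=0)]
```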

  15. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

    Full Text Available The thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional methods for measuring TCF thickness are the single- and double-wire methods, which suffer from several problems, such as risks to personal safety, strong dependence on the operator, and poor repeatability. To solve these problems, in this paper we specifically designed and built an instrumentation and present a novel method to measure TCF thickness. The instrumentation was composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method was based on computer vision algorithms, including an image denoising method, a monocular range measurement method, the scale-invariant feature transform (SIFT), and an image gray-gradient detection method. Using the present instrumentation and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrumentation and method worked well on site at steel plants, can accurately measure the thickness of the TCF, and overcome the disadvantages of the traditional measurement methods, or even replace them.

  16. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace.

    Science.gov (United States)

    Seguí, María del Mar; Cabrero-García, Julio; Crespo, Ana; Verdú, José; Ronda, Elena

    2015-06-01

    To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, receiver operator characteristic curve, and cutoff point. Test-retest repeatability was tested using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to control the visual health of computer workers, and can potentially be used in clinical trials and outcome research. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  18. Bio-inspired computational techniques based on advanced condition monitoring

    Institute of Scientific and Technical Information of China (English)

    Su Liangcheng; He Shan; Li Xiaoli; Li Xinglin

    2011-01-01

    The application of bio-inspired computational techniques to the field of condition monitoring is addressed. First, the bio-inspired computational techniques are briefly introduced, and the advantages and disadvantages of these computational methods are made clear. Then, the roles of condition monitoring in predictive maintenance and failure prediction, and the development trends of condition monitoring, are discussed. Finally, a case study on the condition monitoring of a grinding machine is described, which shows the application of a bio-inspired computational technique to a practical condition monitoring system.

  19. The vertical monitor position for presbyopic computer users with progressive lenses: how to reach clear vision and comfortable head posture.

    Science.gov (United States)

    Weidling, Patrick; Jaschinski, Wolfgang

    2015-01-01

    When presbyopic employees are wearing general-purpose progressive lenses, they have clear vision only with a lower gaze inclination to the computer monitor, given the head assumes a comfortable inclination. Therefore, in the present intervention field study the monitor position was lowered, also with the aim to reduce musculoskeletal symptoms. A comparison group comprised users of lenses that do not restrict the field of clear vision. The lower monitor positions led the participants to lower their head inclination, which was linearly associated with a significant reduction in musculoskeletal symptoms. However, for progressive lenses a lower head inclination means a lower zone of clear vision, so that clear vision of the complete monitor was not achieved, rather the monitor should have been placed even lower. The procedures of this study may be useful for optimising the individual monitor position depending on the comfortable head and gaze inclination and the vertical zone of clear vision of progressive lenses. For users of general-purpose progressive lenses, it is suggested that low monitor positions allow for clear vision at the monitor and for a physiologically favourable head inclination. Employees may improve their workplace using a flyer providing ergonomic-optometric information.

  20. Computational Techniques in Radio Neutrino Event Reconstruction

    Science.gov (United States)

    Beydler, M.; ARA Collaboration

    2016-03-01

    The Askaryan Radio Array (ARA) is a high-energy cosmic neutrino detector constructed with stations of radio antennas buried in the ice at the South Pole. Event reconstruction relies on the analysis of the arrival times of the transient radio signals generated by neutrinos interacting within a few kilometers of the detector. Because of its depth dependence, the index of refraction in the ice complicates the interferometric directional reconstruction of possible neutrino events. Currently, there is an ongoing endeavor to enhance the programs used for the time-consuming computations of the curved paths of the transient wave signals in the ice as well as the interferometric beamforming. We have implemented a fast, multi-dimensional spline table lookup of the wave arrival times in order to enable raytrace-based directional reconstructions. Additionally, we have applied parallel computing across multiple Graphics Processing Units (GPUs) in order to perform the beamforming calculations quickly.
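
    The spline-table idea mentioned above amounts to precomputing the ray-traced arrival times on a grid once and then interpolating them during reconstruction. A minimal sketch of that pattern is shown below; the arrival-time function, grid ranges and interpolation order are illustrative placeholders, not the ARA collaboration's actual tables.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder for the expensive ray trace: arrival time (ns) of a signal from a
# source at horizontal distance r and depth z to a fixed antenna.  In ARA this
# would come from tracing curved paths through depth-dependent ice.
def raytrace_arrival_time(r, z):
    c_ice = 0.17  # effective propagation speed in ice, m/ns (illustrative)
    return np.sqrt(r**2 + z**2) / c_ice

# Build the lookup table once on a coarse grid ...
r_grid = np.linspace(0.0, 5000.0, 201)    # metres
z_grid = np.linspace(-2800.0, 0.0, 141)   # metres below the surface
table = raytrace_arrival_time(*np.meshgrid(r_grid, z_grid, indexing="ij"))

# ... then interpolate it cheaply for every candidate source position during the
# interferometric reconstruction (linear here; a spline stands in for the real code).
lookup = RegularGridInterpolator((r_grid, z_grid), table, method="linear")
print(lookup([[1234.5, -987.6]]))  # fast approximation to the ray-traced time
```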

  1. Oral omega-3 fatty acids treatment in computer vision syndrome related dry eye.

    Science.gov (United States)

    Bhargava, Rahul; Kumar, Prachi; Phogat, Hemant; Kaur, Avinash; Kumar, Manjushri

    2015-06-01

    To assess the efficacy of dietary consumption of omega-3 fatty acids (O3FAs) on dry eye symptoms, Schirmer test, tear film break-up time (TBUT) and conjunctival impression cytology (CIC) in patients with computer vision syndrome. Interventional, randomized, double-blind, multi-centric study. Four hundred and seventy-eight symptomatic patients using computers for more than 3 h per day for a minimum of 1 year were randomized into two groups: 220 patients received two capsules of omega-3 fatty acids, each containing 180 mg eicosapentaenoic acid (EPA) and 120 mg docosahexaenoic acid (DHA), daily (O3FA group) and 236 patients received two capsules of a placebo containing olive oil daily for 3 months (placebo group). The primary outcome measure was improvement in dry eye symptoms, and secondary outcome measures were improvement in Nelson grade and an increase in Schirmer and TBUT scores at 3 months. In the placebo group, before dietary intervention, the mean symptom score, Schirmer, TBUT and CIC scores were 7.5±2, 19.9±4.7 mm, 11.5±2 s and 1±0.9, respectively, and 3 months later were 6.8±2.2, 20.5±4.7 mm, 12±2.2 s and 0.9±0.9, respectively. In the O3FA group, these values were 8.0±2.6, 20.1±4.2 mm, 11.7±1.6 s and 1.2±0.8 before dietary intervention and 3.9±2.2, 21.4±4 mm, 15±1.7 s and 0.5±0.6 after 3 months of intervention, respectively. This study demonstrates the beneficial effect of orally administered O3FAs in alleviating dry eye symptoms, decreasing the tear evaporation rate and improving Nelson grade in patients suffering from computer vision syndrome related dry eye. Copyright © 2015 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  2. Television, computer and portable display device use by people with central vision impairment

    Science.gov (United States)

    Woods, Russell L; Satgunam, PremNandhini

    2011-01-01

    Purpose To survey the viewing experience (e.g. hours watched, difficulty) and viewing metrics (e.g. distance viewed, display size) for television (TV), computers and portable visual display devices for normally-sighted (NS) and visually impaired participants. This information may guide visual rehabilitation. Methods Survey was administered either in person or in a telephone interview on 223 participants of whom 104 had low vision (LV, worse than 6/18, age 22 to 90y, 54 males), and 94 were NS (visual acuity 6/9 or better, age 20 to 86y, 50 males). Depending on their situation, NS participants answered up to 38 questions and LV participants answered up to a further 10 questions. Results Many LV participants reported at least “some” difficulty watching TV (71/103), reported at least “often” having difficulty with computer displays (40/76) and extreme difficulty watching videos on handheld devices (11/16). The average daily TV viewing was slightly, but not significantly, higher for the LV participants (3.6h) than the NS (3.0h). Only 18% of LV participants used visual aids (all optical) to watch TV. Most LV participants obtained effective magnification from a reduced viewing distance for both TV and computer display. Younger LV participants also used a larger display when compared to older LV participants to obtain increased magnification. About half of the TV viewing time occurred in the absence of a companion for both the LV and the NS participants. The mean number of TVs at home reported by LV participants (2.2) was slightly but not significantly (p=0.09) higher than NS participants (2.0). LV participants were equally likely to have a computer but were significantly (p=0.004) less likely to access the internet (73/104) compared to NS participants (82/94). Most LV participants expressed an interest in image enhancing technology for TV viewing (67/104) and for computer use (50/74), if they used a computer. Conclusion In this study, both NS and LV participants

  3. Sieveless particle size distribution analysis of particulate materials through computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Igathinathane, C. [Mississippi State University (MSU); Pordesimo, L. O. [Mississippi State University (MSU); Columbus, Eugene P [ORNL; Batchelor, William D [ORNL; Sokhansanj, Shahabaddine [ORNL

    2009-05-01

    This paper explores the inconsistency of length-based separation by mechanical sieving of particulate materials with standard sieves, which is the standard method of particle size distribution (PSD) analysis. We observed inconsistencies in the length-based separation of particles using standard sieves compared with manual measurements, which showed deviations of 17-22 times. In addition, we demonstrated that the falling-through effect of particles cannot be avoided irrespective of the wall thickness of the sieve. We proposed and utilized computer vision with image processing as an alternative approach, wherein a user-coded Java ImageJ plugin was developed to evaluate PSD based on the length of particles. A regular flatbed scanner acquired digital images of the particulate material. The plugin determines particle lengths from the Feret's diameter and widths from the pixel-march method, the minor axis, or the minimum dimension of the bounding rectangle, utilizing the digital images after assessing the particles' area and shape (convex or nonconvex). The plugin also included the determination of several significant dimensions and PSD parameters. The test samples were ground biomass obtained from the first thinning and mature stand of southern pine forest residues, oak hardwood, switchgrass, elephant grass, giant miscanthus, wheat straw, as well as Basmati rice. The sieveless PSD analysis method utilized the true separation of all particles into groups based on their distinct lengths (419-639 particles for the samples studied), with each group truly represented by its exact length. This approach ensured length-based separation without the inconsistencies observed with mechanical sieving. An image-based sieve simulation (developed separately) indicated a significant effect (P < 0.05) of the number of sieves used in PSD analysis, especially with non-uniform material such as ground biomass, and more than 50 equally spaced sieves were required to match the sieveless all-distinct-particles PSD analysis
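
    The paper's analysis was implemented as a Java ImageJ plugin; a rough Python analogue of the length-based, sieveless idea is sketched below using scikit-image, where each connected particle contributes its maximum Feret diameter as its "length". The image path, threshold choice and area filter are illustrative, not the authors' settings.

```python
import numpy as np
from skimage import io, filters, measure

# Rough scikit-image analogue of length-based (sieveless) PSD analysis.
image = io.imread("scanned_particles.png", as_gray=True)   # hypothetical flatbed scan
binary = image < filters.threshold_otsu(image)             # dark particles, light bed
labels = measure.label(binary)

# The maximum Feret diameter of each connected particle serves as its length.
lengths = np.array([p.feret_diameter_max
                    for p in measure.regionprops(labels) if p.area > 20])

# Sieveless PSD: every distinct particle length forms its own class, so the
# cumulative distribution is built directly from the sorted lengths.
lengths_sorted = np.sort(lengths)
cumulative = np.arange(1, lengths_sorted.size + 1) / lengths_sorted.size
print("particles:", lengths_sorted.size, "median length (px):", np.median(lengths_sorted))
```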

  4. Evolutionary Computation Techniques for Predicting Atmospheric Corrosion

    Directory of Open Access Journals (Sweden)

    Amine Marref

    2013-01-01

    Full Text Available Corrosion occurs in many engineering structures such as bridges, pipelines, and refineries and leads to the destruction of materials in a gradual manner and thus shortening their lifespan. It is therefore crucial to assess the structural integrity of engineering structures which are approaching or exceeding their designed lifespan in order to ensure their correct functioning, for example, carrying ability and safety. An understanding of corrosion and an ability to predict corrosion rate of a material in a particular environment plays a vital role in evaluating the residual life of the material. In this paper we investigate the use of genetic programming and genetic algorithms in the derivation of corrosion-rate expressions for steel and zinc. Genetic programming is used to automatically evolve corrosion-rate expressions while a genetic algorithm is used to evolve the parameters of an already engineered corrosion-rate expression. We show that both evolutionary techniques yield corrosion-rate expressions that have good accuracy.
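
    As a concrete illustration of the second approach (a genetic algorithm evolving the parameters of an already engineered corrosion-rate expression), the toy loop below evolves the coefficients of a power-law expression C = A·t^n against synthetic observations. The data, expression form and evolutionary settings (truncation selection plus Gaussian mutation, no crossover) are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative observations: corrosion depth C (um) after exposure time t (years).
t_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
c_obs = np.array([22.0, 31.0, 44.0, 62.0, 88.0])

def fitness(pop):
    """Negative RMSE of the engineered expression C = A * t**n for each individual."""
    a, n = pop[:, 0:1], pop[:, 1:2]
    pred = a * t_obs**n
    return -np.sqrt(((pred - c_obs) ** 2).mean(axis=1))

# Each individual is a parameter pair (A, n).
pop = rng.uniform([0.0, 0.0], [100.0, 2.0], size=(60, 2))
for _ in range(200):
    parents = pop[np.argsort(fitness(pop))[-30:]]                  # truncation selection
    children = parents[rng.integers(0, 30, 60)] + rng.normal(0.0, 0.05, (60, 2))
    children[:, 0] = np.clip(children[:, 0], 0.0, None)            # keep A physically sensible
    pop = children

best = pop[np.argmax(fitness(pop))]
print("evolved A = %.2f, n = %.3f" % tuple(best))
```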

  5. Review on Computational Models for Vision

    Institute of Scientific and Technical Information of China (English)

    黄凯奇; 谭铁牛

    2013-01-01

    As an effective bridge between visual cognition and information computing, computational models for vision involve several intersecting disciplines, such as cognitive science and information science, and are characterized by complexity and diversity. To better grasp how this field is developing, this paper systematically surveys computational models for vision from the perspective of visual computing, reviewing their development along their two main sources: biological visual mechanisms and computational vision theory. Based on the characteristics of this research, some comments on the development of computational models for vision are offered, and it is argued that their progress will have a profound influence on both computational vision theory and the study of biological visual mechanisms.

  6. Behavioral response of tilapia (Oreochromis niloticus) to acute ammonia stress monitored by computer vision

    Institute of Scientific and Technical Information of China (English)

    XU Jian-yu; MIAO Xiang-wen; LIU Ying; CUI Shao-rong

    2005-01-01

    The behavioral responses of a tilapia (Oreochromis niloticus) school to low (0.13 mg/L), moderate (0.79 mg/L) and high (2.65 mg/L) levels of unionized ammonia (UIA) concentration were monitored using a computer vision system. The swimming activity and geometrical parameters such as the location of the gravity center and the distribution of the fish school were calculated continuously. These behavioral parameters of the tilapia school responded sensitively to moderate and high UIA concentrations. Under high UIA concentration the fish activity showed a significant increase (P<0.05), exhibiting an avoidance reaction to the high-ammonia condition, and then decreased gradually. Under moderate and high UIA concentrations the school's vertical location fluctuated significantly (P<0.05), with the school moving up to the water surface and then down to the bottom of the aquarium alternately and tending to crowd together. After several hours' exposure to the high UIA level, the school finally stayed at the aquarium bottom. These observations indicate that alterations in fish behavior under acute stress can provide important information useful in predicting the stress.
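
    The gravity center of the school, one of the parameters tracked above, can be obtained from each video frame with a simple threshold-and-moments computation. The sketch below is a generic illustration, not the authors' system; the threshold value and video file are assumptions.

```python
import cv2
import numpy as np

def school_gravity_center(frame_bgr, thresh=60):
    """Centroid (x, y) of dark fish pixels against a lighter tank background."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

# Tracking the vertical coordinate over time exposes the alternating up-and-down
# movement and the final bottom-dwelling described for high ammonia levels.
cap = cv2.VideoCapture("tilapia_tank.avi")   # hypothetical recording
ys = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    c = school_gravity_center(frame)
    if c is not None:
        ys.append(c[1])
cap.release()
print("std of vertical position (px):", np.std(ys))
```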

  7. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems, ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems, are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
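
    The kind of small-perturbation robustness check described for the OpenCV-based human detector can be sketched as follows. This uses OpenCV's stock HOG+SVM people detector and random low-amplitude pixel noise; it only illustrates the effect being tested and does not reproduce the paper's symbolic/statistical test-generation machinery.

```python
import cv2
import numpy as np

# OpenCV's default people detector stands in for "a human detection algorithm
# built using OpenCV"; the frame path below is hypothetical.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_detections(img):
    rects, _ = hog.detectMultiScale(img, winStride=(8, 8))
    return len(rects)

frame = cv2.imread("video_frame.png")                       # frame containing people
rng = np.random.default_rng(1)
noise = rng.integers(-2, 3, frame.shape, dtype=np.int16)    # imperceptible perturbation
perturbed = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print("original :", count_detections(frame))
print("perturbed:", count_detections(perturbed))  # may differ despite identical appearance
```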

  8. Implementation of Computer Vision Based Industrial Fire Safety Automation by Using Neuro-Fuzzy Algorithms

    Directory of Open Access Journals (Sweden)

    Manjunatha K.C.

    2015-03-01

    Full Text Available A computer vision-based automated fire detection and suppression system for manufacturing industries is presented in this paper. An automated fire suppression system plays a very significant role in an Onsite Emergency System (OES), as it can prevent accidents and losses to the industry. A rule-based generic collective model for fire pixel classification is proposed for a single camera with multiple fire-suppression chemical control valves. A Neuro-Fuzzy algorithm is used to identify the exact location of fire pixels in the image frame. Fuzzy logic is then used to identify the valve to be controlled, based on the area of the fire and the intensity values of the fire pixels. The fuzzy output is given to a supervisory control and data acquisition (SCADA) system to generate suitable analog values for control valve operation based on the fire characteristics. Results for both the fire identification and suppression systems are presented. The proposed method achieves up to 99% accuracy in fire detection and automated suppression.
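
    A generic rule-based fire-pixel test of the kind such collective models build on can be written in a few lines: fire pixels tend to be bright and red-dominant (R ≥ G ≥ B). The rules and thresholds below are common illustrative choices, not the paper's exact model or its Neuro-Fuzzy stage; the camera frame is hypothetical.

```python
import cv2
import numpy as np

def fire_pixel_mask(frame_bgr, r_min=190, rb_gap=60):
    """Rule-based fire-pixel classification: bright, red-dominant pixels."""
    b = frame_bgr[..., 0].astype(np.int16)
    g = frame_bgr[..., 1].astype(np.int16)
    r = frame_bgr[..., 2].astype(np.int16)
    rules = (r > r_min) & (r >= g) & (g >= b) & ((r - b) > rb_gap)
    return rules.astype(np.uint8) * 255

frame = cv2.imread("plant_camera_frame.png")   # hypothetical industrial camera frame
mask = fire_pixel_mask(frame)
area = int(np.count_nonzero(mask))
m = cv2.moments(mask, binaryImage=True)
if m["m00"] > 0:
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # The fire area and location are the kind of quantities a fuzzy stage could use
    # to select which suppression valve to operate.
    print("fire area (px):", area, "centroid:", (cx, cy))
```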

  9. Information theory analysis of sensor-array imaging systems for computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade-off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor-array instead of square lattice to decrease sensitivity to edge orientation also improves the signal information density up to about 30 percent at high SNRs.

  10. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.

    Science.gov (United States)

    Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2013-04-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.

  11. Computer vision on color-band resistor and its cost-effective diffuse light source design

    Science.gov (United States)

    Chen, Yung-Sheng; Wang, Jeng-Yau

    2016-11-01

    Color-band resistors, which possess a specular surface, are worth studying in the area of color image processing and color material recognition. The specular reflection and halo effects appearing in the acquired resistor image make color band extraction and recognition difficult. A computer vision system is proposed to detect the resistor orientation, segment the resistor's main body, extract and identify the color bands, recognize the color code sequence, and read the resistor value. The effectiveness of reducing the specular reflection and halo effects is confirmed using several cheap covers, e.g., a paper bowl, cup, or box lined inside with white paper, combined with a ring-type LED controlled automatically according to the detected resistor orientation. The calibration of the microscope used to acquire the resistor image is described and a proper environmental light intensity is suggested. Experiments on 200 4-band and 200 5-band resistors, comprising the 12 colors used on color-band resistors, show a correct resistor-reading rate above 90%. The performance, reported as the number of failures in horizontal alignment, color band extraction, color identification, and color code sequence flip-over checking, confirms the feasibility of the presented approach.

  12. Computer-based and web-based applications for night vision goggle training

    Science.gov (United States)

    Ruffner, John W.; Woodward, Kim G.

    2001-08-01

    Night vision goggles (NVGs) can enhance military and civilian operations at night. With this increased capability comes the requirement to provide suitable training. Results from field experience and accident analyses suggest that problems experienced by NVG users can be attributed to a limited understanding of NVG limitations and to perceptual problems. In addition, there is evidence that NVG skills are perishable and require frequent practice. Formal training is available to help users obtain the required knowledge and skills. However, there is often insufficient opportunity to obtain and practice perceptual skills prior to using NVGs in the operational environment. NVG users need early and continued exposure to the night environment across a broad range of visual and operational conditions to develop and maintain the necessary knowledge and perceptual skills. NVG training has consisted of classroom instruction, hands-on training, and simulator training. Advances in computer-based training (CBT) and web-based training (WBT) have made these technologies very appealing as additions to the NVG training mix. This paper discusses our efforts to develop multimedia, interactive CBT and WBT for NVG training. We discuss how NVG CBT and WBT can be extended to military and civilian ground, maritime, and aviation NVG training.

  13. Optimisation and assessment of three modern touch screen tablet computers for clinical vision testing.

    Science.gov (United States)

    Tahir, Humza J; Murray, Ian J; Parry, Neil R A; Aslam, Tariq M

    2014-01-01

    Technological advances have led to the development of powerful yet portable tablet computers whose touch-screen resolutions now permit the presentation of targets small enough to test the limits of normal visual acuity. Such devices have become ubiquitous in daily life and are moving into the clinical space. However, in order to produce clinically valid tests, it is important to identify the limits imposed by the screen characteristics, such as resolution, brightness uniformity, contrast linearity and the effect of viewing angle. Previously we have conducted such tests on the iPad 3. Here we extend our investigations to 2 other devices and outline a protocol for calibrating such screens, using standardised methods to measure the gamma function, warm up time, screen uniformity and the effects of viewing angle and screen reflections. We demonstrate that all three devices manifest typical gamma functions for voltage and luminance with warm up times of approximately 15 minutes. However, there were differences in homogeneity and reflectance among the displays. We suggest practical means to optimise quality of display for vision testing including screen calibration.

  14. Optimisation and assessment of three modern touch screen tablet computers for clinical vision testing.

    Directory of Open Access Journals (Sweden)

    Humza J Tahir

    Full Text Available Technological advances have led to the development of powerful yet portable tablet computers whose touch-screen resolutions now permit the presentation of targets small enough to test the limits of normal visual acuity. Such devices have become ubiquitous in daily life and are moving into the clinical space. However, in order to produce clinically valid tests, it is important to identify the limits imposed by the screen characteristics, such as resolution, brightness uniformity, contrast linearity and the effect of viewing angle. Previously we have conducted such tests on the iPad 3. Here we extend our investigations to 2 other devices and outline a protocol for calibrating such screens, using standardised methods to measure the gamma function, warm up time, screen uniformity and the effects of viewing angle and screen reflections. We demonstrate that all three devices manifest typical gamma functions for voltage and luminance with warm up times of approximately 15 minutes. However, there were differences in homogeneity and reflectance among the displays. We suggest practical means to optimise quality of display for vision testing including screen calibration.

  16. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    Science.gov (United States)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed is a problem. In this study, a new computer-vision-based order tracking method is proposed to address this problem. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the frequency-domain signal is transferred to an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
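
    The core of the order-tracking step is equi-angle resampling: the IRS curve from the vision stage is integrated to give shaft angle versus time, and the audio signal is then interpolated onto a uniform angle grid so that shaft orders become stationary. The sketch below uses synthetic signals and an assumed linear run-up purely for illustration.

```python
import numpy as np

fs = 44100.0                                 # audio sampling rate (Hz), illustrative
t = np.arange(0, 5.0, 1.0 / fs)

# Placeholder IRS curve from the vision step (rev/s) and a synthetic "audio"
# signal locked to the 3rd shaft order.
irs = 20.0 + 4.0 * t                         # linear run-up, for illustration only
angle = 2 * np.pi * np.cumsum(irs) / fs      # shaft angle (rad) by integration
audio = np.sin(3 * angle) + 0.1 * np.random.randn(t.size)

# Equi-angle resampling: interpolate the time-domain signal onto a uniform angle grid.
samples_per_rev = 64
angle_grid = np.arange(angle[0], angle[-1], 2 * np.pi / samples_per_rev)
resampled = np.interp(angle_grid, angle, audio)

# In the angular domain the spectrum peaks at order 3 regardless of speed variation.
spectrum = np.abs(np.fft.rfft(resampled * np.hanning(resampled.size)))
orders = np.fft.rfftfreq(resampled.size, d=1.0 / samples_per_rev)
print("dominant order:", orders[np.argmax(spectrum[1:]) + 1])
```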

  17. Interactive and Audience Adaptive Digital Signage Using Real-Time Computer Vision

    Directory of Open Access Journals (Sweden)

    Robert Ravnik

    2013-02-01

    Full Text Available In this paper we present the development of an interactive, content‐aware and cost‐effective digital signage system. Using a monocular camera installed within the frame of a digital signage display, we employ real‐time computer vision algorithms to extract temporal, spatial and demographic features of the observers, which are further used for observer‐specific broadcasting of digital signage content. The number of observers is obtained by the Viola and Jones face detection algorithm, whilst facial images are registered using multi‐view Active Appearance Models. The distance of the observers from the system is estimated from the interpupillary distance of registered faces. Demographic features, including gender and age group, are determined using SVM classifiers to achieve individual observer‐specific selection and adaption of the digital signage broadcasting content. The developed system was evaluated at the laboratory study level and in a field study performed for audience measurement research. Comparison of our monocular localization module with the Kinect stereo‐system reveals a comparable level of accuracy. The facial characterization module is evaluated on the FERET database with 95% accuracy for gender classification and 92% for age group. Finally, the field study demonstrates the applicability of the developed system in real‐life environments.
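
    Estimating the observer's distance from the interpupillary distance of a registered face reduces to a pinhole-camera relation: distance ≈ f_px · IPD_real / IPD_px. The sketch below uses OpenCV's bundled eye cascade and assumed values for the focal length and mean interpupillary distance; it illustrates the geometric idea only and is not the system's actual monocular localization module.

```python
import cv2

FOCAL_PX = 900.0     # camera focal length in pixels (from calibration), assumed
MEAN_IPD_MM = 63.0   # average adult interpupillary distance, assumed constant

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def observer_distance_mm(face_gray):
    """Pinhole estimate of viewer distance from the pixel distance between two eyes."""
    eyes = eye_cascade.detectMultiScale(face_gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None
    (x1, y1, w1, h1), (x2, y2, w2, h2) = eyes[:2]
    c1 = (x1 + w1 / 2.0, y1 + h1 / 2.0)
    c2 = (x2 + w2 / 2.0, y2 + h2 / 2.0)
    ipd_px = ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5
    return FOCAL_PX * MEAN_IPD_MM / ipd_px

# face = cv2.imread("registered_face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
# print(observer_distance_mm(face))
```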

  18. Real-Time Evaluation of Breast Self-Examination Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Eman Mohammadi

    2014-01-01

    Full Text Available Breast cancer is the most common cancer among women worldwide and breast self-examination (BSE is considered as the most cost-effective approach for early breast cancer detection. The general objective of this paper is to design and develop a computer vision algorithm to evaluate the BSE performance in real-time. The first stage of the algorithm presents a method for detecting and tracking the nipples in frames while a woman performs BSE; the second stage presents a method for localizing the breast region and blocks of pixels related to palpation of the breast, and the third stage focuses on detecting the palpated blocks in the breast region. The palpated blocks are highlighted at the time of BSE performance. In a correct BSE performance, all blocks must be palpated, checked, and highlighted, respectively. If any abnormality, such as masses, is detected, then this must be reported to a doctor to confirm the presence of this abnormality and proceed to perform other confirmatory tests. The experimental results have shown that the BSE evaluation algorithm presented in this paper provides robust performance.

  19. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  20. Computer vision-guided robotic system for electrical power lines maintenance

    Science.gov (United States)

    Tremblay, Jack; Laliberte, T.; Houde, Regis; Pelletier, Michel; Gosselin, Clement M.; Laurendeau, Denis

    1995-12-01

    The paper presents several modules of a computer vision-assisted robotic system for the maintenance of live electrical power lines. The basic scene of interest is composed of generic components such as a crossarm, a power line and a porcelain insulator. The system is under the supervision of an operator who validates each subtask. The system uses a 3D range finder mounted at the end-effector of a 6-DOF manipulator for the acquisition of range data on the scene. Since more than one view is required to obtain enough information on the scene, a view integration procedure is applied to the data in order to merge the information into a single reference frame. A volumetric description of the scene, in this case an octree, is built using the range data. The octree is transformed into an occupancy grid which is used for avoiding collisions between the manipulator and the components of the scene during the line manipulation step. The collision avoidance module uses the occupancy grid to create a discrete electrostatic potential field representing the various goals (e.g. objects of interest) and obstacles in the scene. The algorithm takes into account the articular limits of the robot and uses a redundant manipulator to ensure that the collision avoidance constraints do not compete with the task, which is to reach a given goal with the end-effector. A pose determination algorithm called Iterative Closest Point is presented. The algorithm computes the pose of the various components of the scene and allows the robot to manipulate these components safely. The system has been tested on an actual scene. The manipulation was successfully implemented using a synchronized geometry range finder mounted on a PUMA 760 robot manipulator under the control of Cartool.
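
    The Iterative Closest Point step mentioned above alternates between finding closest-point correspondences and solving for the rigid transform that best aligns them. A minimal point-to-point version is sketched below with placeholder point clouds; the paper's implementation and any robustness refinements are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Rigid transform (R, t) aligning src onto dst in the least-squares sense."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    """Point-to-point ICP: returns the source points registered onto the target cloud."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)             # closest-point correspondences
        R, t = best_fit_transform(current, target[idx])
        current = current @ R.T + t
    return current

# Hypothetical use: register scanned insulator points onto the points of a reference model
# aligned = icp(scanned_points, model_points)
```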

  1. Measuring human emotions with modular neural networks and computer vision based applications

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2015-05-01

    Full Text Available This paper describes a neural network architecture for emotion recognition for human-computer interfaces and applied systems. In the current research, we propose a combination of the most recent biometric techniques with the neural networks (NN approach for real-time emotion and behavioral analysis. The system will be tested in real-time applications of customers' behavior for distributed on-land systems, such as kiosks and ATMs.

  2. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in human-computer interaction

  3. Computation Techniques for the Volume of a Tetrahedron

    Science.gov (United States)

    Srinivasan, V. K.

    2010-01-01

    The purpose of this article is to discuss specific techniques for the computation of the volume of a tetrahedron. A few of them are taught in undergraduate multivariable calculus courses. A few of them are found in textbooks on coordinate geometry and synthetic solid geometry. This article gathers many of these techniques so as to constitute a…

  4. Computer Vision Methods for Improved Mobile Robot State Estimation in Challenging Terrains

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2006-11-01

    Full Text Available External perception based on vision plays a critical role in developing improved and robust localization algorithms, as well as in gaining important information about the vehicle and the terrain it is traversing. This paper presents two novel methods for rough-terrain mobile robots using visual input. The first method consists of a stereovision algorithm for real-time 6DoF ego-motion estimation. It integrates image intensity information and 3D stereo data in the well-known Iterative Closest Point (ICP) scheme. Neither a-priori knowledge of the motion nor inputs from other sensors are required, while the only assumption is that the scene always contains visually distinctive features which can be tracked over subsequent stereo pairs. This generates what is usually referred to as visual odometry. The second method aims at estimating the wheel sinkage of a mobile robot on sandy soil, based on an edge detection strategy. A semi-empirical model of wheel sinkage is also presented, referring to the classical terramechanics theory. Experimental results obtained with an all-terrain mobile robot and with a wheel sinkage test bed are presented to validate our approach. It is shown that the proposed techniques can be integrated in control and planning algorithms to improve the performance of ground vehicles operating in uncharted environments.

  5. Using Computer Vision and Depth Sensing to Measure Healthcare Worker-Patient Contacts and Personal Protective Equipment Adherence Within Hospital Rooms.

    Science.gov (United States)

    Chen, Junyang; Cremer, James F; Zarei, Kasra; Segre, Alberto M; Polgreen, Philip M

    2016-01-01

    Background.  We determined the feasibility of using computer vision and depth sensing to detect healthcare worker (HCW)-patient contacts to estimate both hand hygiene (HH) opportunities and personal protective equipment (PPE) adherence. Methods.  We used multiple Microsoft Kinects to track the 3-dimensional movement of HCWs and their hands within hospital rooms. We applied computer vision techniques to recognize and determine the position of fiducial markers attached to the patient's bed to determine the location of the HCW's hands with respect to the bed. To measure our system's ability to detect HCW-patient contacts, we counted each time a HCW's hands entered a virtual rectangular box aligned with a patient bed. To measure PPE adherence, we identified the hands, torso, and face of each HCW on room entry, determined the color of each body area, and compared it with the color of gloves, gowns, and face masks. We independently examined a ground truth video recording and compared it with our system's results. Results.  Overall, for touch detection, the sensitivity was 99.7%, with a positive predictive value of 98.7%. For gowned entrances, sensitivity was 100.0% and specificity was 98.15%. For masked entrances, sensitivity was 100.0% and specificity was 98.75%; for gloved entrances, the sensitivity was 86.21% and specificity was 98.28%. Conclusions.  Using computer vision and depth sensing, we can estimate potential HH opportunities at the bedside and also estimate adherence to PPE. Our fine-grained estimates of how and how often HCWs interact directly with patients can inform a wide range of patient-safety research.

  6. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction.Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  7. From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning

    Science.gov (United States)

    Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle

    2016-04-01

    The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost - a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case in part due to larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size being clearly visible in radar and infrared satellite imagery, which makes it easier to detect using Computer Vision (CV). The power of CV techniques makes basic analysis thus developed scalable to other smaller and less known, but still influential, currents, which are not just curves on a map, but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier Infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as feature space representation in a machine learning context, in this case with the EC's H2020-sponsored 'See.4C' project, in the context of which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of Gulf Stream position exist, they differ in methodology

  8. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency.

  9. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle. The water displacement method is inaccurate and is considered a destructive method. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From the silhouettes of the object, the Monte Carlo method was performed to approximate the volume of the object. The simulation results show that the algorithm produced high accuracy and precision for volume measurement.
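
    The Monte Carlo idea can be sketched as follows for a set of calibrated views: sample random points inside a bounding box, keep only those whose projections fall inside every silhouette, and scale the box volume by the accepted fraction. The projection function, silhouettes and bounding box below are user-supplied placeholders; the paper's calibration and five-view setup are not reproduced.

```python
import numpy as np

def mc_volume(silhouettes, project, bbox, n_samples=200_000, seed=0):
    """Monte Carlo volume: fraction of random points whose projections fall inside
    every binary silhouette, multiplied by the bounding-box volume."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bbox[0], float), np.asarray(bbox[1], float)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    inside = np.ones(n_samples, dtype=bool)
    for view, sil in enumerate(silhouettes):
        u, v = project(pts, view)                # pixel coordinates in this view
        u, v = u.astype(int), v.astype(int)
        in_frame = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(n_samples, dtype=bool)
        hit[in_frame] = sil[v[in_frame], u[in_frame]]
        inside &= hit
    return inside.mean() * np.prod(hi - lo)

# `silhouettes` would be boolean masks from the five views, and `project` the
# calibrated camera model mapping 3D points to pixel coordinates in a given view.
```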

  10. A clinical study on "Computer vision syndrome" and its management with Triphala eye drops and Saptamrita Lauha.

    Science.gov (United States)

    Gangamma, M P; Poonam; Rajagopala, Manjusha

    2010-04-01

    The American Optometric Association (AOA) defines computer vision syndrome (CVS) as a "complex of eye and vision problems related to near work, which are experienced during or related to computer use". Most studies indicate that Video Display Terminal (VDT) operators report more eye-related problems than non-VDT office workers. The causes of the inefficiencies and the visual symptoms are a combination of individual visual problems and poor office ergonomics. In this clinical study on CVS, 151 patients were registered, out of whom 141 completed the treatment. In Group A, 45 patients had been prescribed Triphala eye drops; in Group B, 53 patients had been prescribed Triphala eye drops and Saptamrita Lauha tablets internally; and in Group C, 43 patients had been prescribed placebo eye drops and placebo tablets. In total, marked improvement was observed in 48.89%, 54.71% and 6.98% of patients in Groups A, B and C, respectively.

  11. Summarization of Camera Calibration in Computer Vision

    Institute of Scientific and Technical Information of China (English)

    马伟

    2013-01-01

    By analysing the principles of computer vision, this paper presents camera calibration methods used in computer vision and discusses the application of these calibration methods.
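
    As one concrete instance of the calibration methods such a survey covers, the standard chessboard-based procedure available in OpenCV is sketched below; the board size and image paths are illustrative assumptions, and this is not a method taken from the paper itself.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner chessboard corners per row and column, illustrative
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board frame, unit squares

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.png"):          # hypothetical calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

# Intrinsic matrix K, distortion coefficients and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix K:\n", K)
```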

  12. Operational modal analysis on a VAWT in a large wind tunnel using stereo vision technique

    DEFF Research Database (Denmark)

    Najafi, Nadia; Schmidt Paulsen, Uwe

    2017-01-01

    This paper is about the development and use of a research-based stereo vision system for vibration and operational modal analysis on a parked, 1-kW, 3-bladed vertical axis wind turbine (VAWT), tested in a wind tunnel at high wind. Vibrations were explored experimentally by tracking small deflections of markers on the structure with two cameras, and also numerically, to study structural vibrations, with the overall objective of investigating the challenges and proving the capability of using stereo vision; the approach succeeded in picking very closely spaced modes. The two high-speed cameras provided displacement measurements without interfering with the flow. Finally, the uncertainty of the 3D displacement measurement was evaluated by applying a generalized method based on the law of error propagation to a linear camera model of the stereo vision system.

  13. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    Science.gov (United States)

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

    With consumer concerns over food quality and safety increasing, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments of food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, can estimate the characteristics of food products with the advantages of high speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review presents the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013, and also discusses future trends in combination with spectroscopy.

  14. A Review of the Application of Computer Vision to the Inspection and Assessment of Textiles Apparent Properties

    Institute of Scientific and Technical Information of China (English)

    步红刚; 李立轻; 黄秀宝

    2004-01-01

    Due to its advantages of objectivity, automation, accuracy and speed in various applications, computer vision has become one of the research hotspots in the objective inspection and assessment of the apparent properties of textiles over the past two decades. This paper provides a brief review of its applications, at home and abroad during the recent decade, to the automatic inspection and assessment of the various apparent properties of textiles, such as yarns, woven fabrics, knitted fabrics, carpets, nonwoven fabrics and textile webs, together with a detailed introduction to the research work conducted by our research section (Computer Vision's Textiles Application Research Section, College of Textiles, Dong Hua University), including the objective evaluation of fabric wrinkle grade, automatic fabric defect detection, and assessment of fabric pilling grade. Experimental results have proved the feasibility of our approaches in the objective inspection and assessment of fabric apparent properties, and indicate that computer vision is a powerful tool for the objective and automatic inspection and assessment of the apparent properties of textiles, with a bright application future.

  15. Egg volume prediction using machine vision technique based on pappus theorem and artificial neural network.

    Science.gov (United States)

    Soltani, Mahmoud; Omid, Mahmoud; Alimardani, Reza

    2015-05-01

    Egg size is one of the important properties of an egg that is judged by customers. Accordingly, in egg sorting and grading, the size of eggs must be considered. In this research, a new method of egg volume prediction was proposed without the need to measure egg weight. An accurate and efficient image processing algorithm was designed and implemented for computing the major and minor diameters of eggs. Two methods of egg size modeling were developed. In the first method, a mathematical model was proposed based on Pappus's theorem. In the second method, an Artificial Neural Network (ANN) technique was used to estimate egg volume. The egg volumes determined by these methods were compared statistically with actual values. For the mathematical model, the R², mean absolute error and maximum absolute error values were 0.99, 0.59 cm³ and 1.69 cm³, respectively. To determine the best ANN, R²test and RMSEtest were used as selection criteria. The best ANN topology was 2-28-1, with an R²test of 0.992 and an RMSEtest of 0.66. After system calibration, the proposed models were evaluated. The results indicated that the mathematical model yielded more satisfactory results, so this technique was selected for egg size determination.
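
    The paper's own model is built on Pappus's theorem together with an ANN; as a hedged illustration of the same geometric idea, the sketch below estimates the volume of a solid of revolution from a digitized profile with the disc (trapezoidal) rule. The profile extraction and pixel-to-millimetre scale are assumed to come from the image processing step.

```python
import numpy as np

def volume_of_revolution(radii_mm, axis_step_mm):
    """Disc-method volume of a solid of revolution.

    radii_mm:     profile half-widths r(x) measured from the egg's major axis,
                  one value per image column along the axis (from the silhouette).
    axis_step_mm: physical length of one column (pixel size along the axis).
    """
    radii = np.asarray(radii_mm, dtype=float)
    # V = pi * integral of r(x)^2 dx, evaluated with the trapezoidal rule.
    return np.pi * axis_step_mm * np.sum((radii[:-1] ** 2 + radii[1:] ** 2) / 2.0)

# Toy check: a sphere of radius 20 mm described by its circular profile.
x = np.linspace(-20.0, 20.0, 2001)
r = np.sqrt(np.maximum(20.0 ** 2 - x ** 2, 0.0))
print(volume_of_revolution(r, x[1] - x[0]))       # ~33510 mm^3
print(4.0 / 3.0 * np.pi * 20.0 ** 3)              # analytic value: ~33510 mm^3
```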

  16. Tracking the Creation of Tropical Forest Canopy Gaps with UAV Computer Vision Remote Sensing

    Science.gov (United States)

    Dandois, J. P.

    2015-12-01

    The formation of canopy gaps is fundamental for shaping forest structure and is an important component of ecosystem function. Recent time series of airborne LIDAR have shown great promise for improving understanding of the spatial distribution and size of forest gaps. However, such work typically looks at gap formation across multiple years, and important intra-annual variation in gap dynamics remains unknown. Here we present findings on the intra-annual dynamics of canopy gap formation within the 50 ha forest dynamics plot of Barro Colorado Island (BCI), Panama, based on unmanned aerial vehicle (UAV) remote sensing. High-resolution imagery (7 cm GSD) over the 50 ha plot was obtained regularly (≈ every 10 days) beginning October 2014 using a UAV equipped with a point-and-shoot camera. Imagery was processed into three-dimensional (3D) digital surface models (DSMs) using automated computer vision structure-from-motion / photogrammetric methods. New gaps that formed between UAV flights were identified by subtracting the DSMs of each interval and identifying areas of large deviation. A total of 48 new gaps were detected from 2014-10-02 to 2015-07-23, with sizes ranging from less than 20 m² to greater than 350 m². The creation of new gaps was also evaluated across wet and dry seasons, with 4.5 new gaps detected per month in the dry season (Jan. - May) and 5.2 per month outside the dry season (Oct. - Jan. & May - July). The incidence of gap formation was positively correlated with ground-surveyed liana stem density (R² = 0.77, p ...) ... UAV remote sensing.
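
    A minimal sketch of the DSM-differencing step described above, assuming co-registered before/after rasters in metres; the 5 m height-drop threshold and minimum gap area are illustrative choices, not the study's actual parameters.

```python
import numpy as np
from scipy import ndimage

def detect_new_gaps(dsm_before, dsm_after, drop_m=5.0, min_area_px=50):
    """Flag new canopy gaps as connected regions of large height loss
    between two co-registered digital surface models (same grid, metres)."""
    height_change = dsm_after - dsm_before
    candidate = height_change < -drop_m             # large canopy-height drop

    labels, n = ndimage.label(candidate)            # connected components
    gaps = []
    for region in range(1, n + 1):
        area_px = int((labels == region).sum())
        if area_px >= min_area_px:                  # ignore tiny artefacts
            gaps.append((region, area_px))
    return labels, gaps

# Example with synthetic 100x100 DSMs (1 m pixels): one 10x10 treefall gap.
before = np.full((100, 100), 30.0)
after = before.copy()
after[40:50, 60:70] = 2.0
_, gaps = detect_new_gaps(before, after)
print(gaps)   # [(1, 100)] -> one new gap of 100 m^2
```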

  17. Computer vision-based apple grading for golden delicious apples based on surface features

    Directory of Open Access Journals (Sweden)

    Payman Moallem

    2017-03-01

    Full Text Available In this paper, a computer vision-based algorithm for golden delicious apple grading is proposed which works in six steps. Non-apple pixels are first removed from the input images as background. Then, the stem end is detected by a combination of morphological methods and a Mahalanobis distance classifier. The calyx region is also detected by applying K-means clustering on the Cb component in the YCbCr color space. After that, defect segmentation is achieved using a Multi-Layer Perceptron (MLP) neural network. In the next step, stem end and calyx regions are removed from the defected regions to refine and improve the apple grading process. Then, statistical, textural and geometric features are extracted from the refined defected regions. Finally, for apple grading, the performance of Support Vector Machine (SVM), MLP and K-Nearest Neighbor (KNN) classifiers is compared. Classification is done in two manners: in the first, an input apple is classified into two categories, healthy and defected; in the second, the input apple is classified into three categories, first rank, second rank and rejected. In both grading steps, the SVM classifier works best, with recognition rates of 92.5% and 89.2% for the two categories (healthy and defected) and the three quality categories (first rank, second rank and rejected), respectively, among 120 different golden delicious apple images, considering K-folding with K = 5. Moreover, the accuracy of the proposed segmentation algorithms, including stem end detection and calyx detection, is evaluated on two different apple image databases.
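
    As a hedged sketch of the calyx-detection step (K-means on the Cb component of YCbCr), the snippet below clusters the Cb channel with scikit-learn and keeps one cluster as the calyx candidate; picking the lowest-Cb cluster and the morphological clean-up are assumptions, not necessarily the paper's exact rules.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def calyx_candidate_mask(bgr_image, n_clusters=3):
    """Cluster the Cb chrominance channel with K-means and return a mask of
    the cluster with the lowest mean Cb value as the calyx candidate region."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cb = ycrcb[:, :, 2].astype(np.float32)          # OpenCV orders Y, Cr, Cb

    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(cb.reshape(-1, 1)).reshape(cb.shape)

    calyx_cluster = int(np.argmin(km.cluster_centers_))   # assumed: darkest-Cb cluster
    mask = (labels == calyx_cluster).astype(np.uint8) * 255

    # Light morphological clean-up before passing the mask on for grading.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# mask = calyx_candidate_mask(cv2.imread("golden_delicious.jpg"))  # hypothetical file
```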

  18. OpenVX-based Python Framework for real-time cross platform acceleration of embedded computer vision applications

    Directory of Open Access Journals (Sweden)

    Ori Heimlich

    2016-11-01

    Full Text Available Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, in an attempt to provide both system-level and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained data flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative and functional programming, nor does it have runtime or type-checking. Here we present a Python-based full implementation of OpenVX, which eliminates much of the discrepancy between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. The demonstration includes static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.

  19. Combined Error Correction Techniques for Quantum Computing Architectures

    CERN Document Server

    Byrd, M S; Byrd, Mark S.; Lidar, Daniel A.

    2003-01-01

    Proposals for quantum computing devices are many and varied. They each have unique noise processes that make none of them fully reliable at this time. There are several error correction/avoidance techniques which are valuable for reducing or eliminating errors, but not one, alone, will serve as a panacea. One must therefore take advantage of the strength of each of these techniques so that we may extend the coherence times of the quantum systems and create more reliable computing devices. To this end we give a general strategy for using dynamical decoupling operations on encoded subspaces. These encodings may be of any form; of particular importance are decoherence-free subspaces and quantum error correction codes. We then give means for empirically determining an appropriate set of dynamical decoupling operations for a given experiment. Using these techniques, we then propose a comprehensive encoding solution to many of the problems of quantum computing proposals which use exchange-type interactions. This us...

  20. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, was developed and tested. The results of the algorithms' evaluation show high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.
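
    The abstract does not detail the photogrammetric computation, so the following is a generic sketch of two-camera triangulation with OpenCV, assuming calibrated intrinsics and exterior orientation (K, R, t) for each camera and matched 2D marker detections; it is not the Mosca system's own code.

```python
import cv2
import numpy as np

def triangulate_markers(K1, R1, t1, K2, R2, t2, pts1, pts2):
    """Triangulate 3D marker positions from matched 2D detections in two
    calibrated cameras (intrinsics K, rotation R, translation t per camera)."""
    P1 = K1 @ np.hstack([R1, t1.reshape(3, 1)])      # 3x4 projection matrices
    P2 = K2 @ np.hstack([R2, t2.reshape(3, 1)])

    pts1 = np.asarray(pts1, dtype=float).T           # shape (2, N) as OpenCV expects
    pts2 = np.asarray(pts2, dtype=float).T

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4xN result
    return (X_h[:3] / X_h[3]).T                      # N x 3 metric coordinates

# Usage (hypothetical calibration values and detections):
# X = triangulate_markers(K1, R1, t1, K2, R2, t2, detections_cam1, detections_cam2)
```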

  1. Exploration of the Theory Framework of Computer Vision

    Institute of Scientific and Technical Information of China (English)

    罗阳倩子

    2015-01-01

    This paper expounds the theoretical framework of computer vision, analyzes the problems of the existing framework, and puts forward new developments of the framework to ensure that the scene information obtained through computer vision is more complete.

  2. Study on real-time registration in dual spectrum low level light night vision technique

    Science.gov (United States)

    Bai, Lian-fa; Zhang, Yi; Zhang, Chuang; Chen, Qian; Gu, Guo-hua

    2009-07-01

    In low level light (LLL) color night vision technology, dual spectrum images, each carrying its own specific information, are acquired, and the target identification probability can be effectively improved through dual spectrum image fusion. Image registration is one of the key technologies in this process. Current dual spectrum image registration methods mainly include the dual-imaging-channel common optical axis scheme and the image characteristic pixel searching scheme. In the common optical axis scheme, additional prismatic optical components must be used, and a large amount of radiative energy is wasted. In the characteristic pixel searching scheme, the complicated algorithm makes real-time implementation difficult. In this paper, the structure of a dual channel dual spectrum LLL color night vision system and the characteristics of dual spectrum images are studied, the two-dimensional histogram of the dual spectrum image gray-level co-occurrence matrix is analysed, and a real-time image registration method including electronic digital shifting, pixel extension and pixel extraction is put forward. By analysing the spatial gray-scale correlation of the fused image, the registration precision is quantitatively expressed. Emulation experiments indicate that this algorithm is fast and accurate for dual channel dual spectrum image registration. The method was realized on a dual spectrum LLL color night vision experimental apparatus based on the Texas Instruments digital video processing device DM642.
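
    The paper's registration is based on electronic digital shifting with pixel extension and extraction; the sketch below instead shows a standard phase-correlation estimate of the inter-channel shift, included only to illustrate how a global translation between two spectral channels can be recovered and undone before fusion.

```python
import numpy as np

def estimate_shift(reference, moving):
    """Estimate the integer translation of `moving` relative to `reference`
    (single-channel images of equal size) by phase correlation."""
    F_ref = np.fft.fft2(reference)
    F_mov = np.fft.fft2(moving)
    cross_power = F_mov * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase information only
    corr = np.fft.ifft2(cross_power).real

    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return int(dy), int(dx)

# Toy check: one channel shifted by (3, -5) pixels relative to the other.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
mov = np.roll(ref, (3, -5), axis=(0, 1))
print(estimate_shift(ref, mov))                        # -> (3, -5)
# Registration before fusion: np.roll(mov, (-3, 5), axis=(0, 1)) realigns the channels.
```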

  3. Introduction of Soft Computing Techniques to Welfare Equipment

    OpenAIRE

    Takagi, Hideyuki; Kamohara, Shin'ichi; Takeda, Takashi

    1999-01-01

    This paper introduces our research into the use of soft computing techniques for hearing impairment compensation and physical rehabilitation. Evolutionary computation (EC) is used for fitting hearing aids based on an interactive EC and the user's preferences for sound. This technology allows hearing aid users to optimize their hearing aids in any acoustic environment without professional assistance. The virtual reality (VR) system for physical rehabilitation allows patients to train their mus...

  4. Phase behavior of multicomponent membranes: Experimental and computational techniques

    DEFF Research Database (Denmark)

    Bagatolli, Luis; Kumar, P.B. Sunil

    2009-01-01

    ... membranes. The current increase in interest in domain formation in multicomponent membranes also stems from experiments demonstrating liquid ordered-liquid disordered coexistence in mixtures of lipids and cholesterol, and from the success of several computational models in predicting their behavior. This review includes basic foundations on membrane model systems and experimental approaches applied in the membrane research area, stressing recent advances in experimental and computational techniques.

  5. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    Science.gov (United States)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than lab setups, it provides a means to produce quantitative comparison motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the
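
    A minimal sketch of the kind of OpenCV-based marker tracking described above: colour thresholding plus image moments give a per-frame centroid. The HSV bounds, the video filename and the absence of lens-distortion correction are simplifying assumptions, not the project's actual pipeline.

```python
import cv2
import numpy as np

def track_marker_centroids(video_path, lower_hsv=(35, 80, 80), upper_hsv=(85, 255, 255)):
    """Track the centroid of a single colour-coded marker frame by frame.

    The HSV thresholds are placeholders for a green marker; a real setup would
    calibrate them (and the pixel-to-metre scale) for the exercise hardware."""
    cap = cv2.VideoCapture(video_path)
    centroids = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
        m = cv2.moments(mask)
        if m["m00"] > 0:                          # marker visible in this frame
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        else:
            centroids.append((np.nan, np.nan))    # keep frame count aligned
    cap.release()
    return np.array(centroids)

# pixels = track_marker_centroids("parabolic_flight_trial.avi")   # hypothetical file
```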

  6. The potential of computer vision, optical backscattering parameters and artificial neural network modelling in monitoring the shrinkage of sweet potato (Ipomoea batatas L.) during drying.

    Science.gov (United States)

    Onwude, Daniel I; Hashim, Norhashila; Abdan, Khalina; Janius, Rimfiel; Chen, Guangnan

    2017-07-30

    Drying is a method used to preserve agricultural crops. During the drying of products with high moisture content, structural changes in shape, volume, area, density and porosity occur. These changes can affect the final quality of the dried product and also the effective design of drying equipment. Therefore, this study investigated a novel approach to monitoring and predicting the shrinkage of sweet potato during drying. Drying experiments were conducted at temperatures of 50-70 °C and sample thicknesses of 2-6 mm. The volume and surface area obtained from camera vision, and the perimeter and illuminated area from backscattered optical images, were analysed and used to evaluate the shrinkage of sweet potato during drying. The relationship between dimensionless moisture content and shrinkage of sweet potato in terms of volume, surface area, perimeter and illuminated area was found to be linearly correlated. The results also demonstrated that the shrinkage of sweet potato based on computer vision and backscattered optical parameters is affected by product thickness, drying temperature and drying time. A multilayer perceptron (MLP) artificial neural network with an input layer containing three cells, two hidden layers (18 neurons), and an output layer with five cells was used to develop a model that can monitor, control and predict the shrinkage parameters and moisture content of sweet potato slices under different drying conditions. The developed ANN model satisfactorily predicted the shrinkage and dimensionless moisture content of sweet potato with a correlation coefficient greater than 0.95. Combined computer vision, laser light backscattering imaging and artificial neural networks can be used as a nondestructive, rapid and easily adaptable technique for in-line monitoring, predicting and controlling the shrinkage and moisture changes of food and agricultural crops during drying. © 2017 Society of Chemical Industry.
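
    A hedged sketch of the 3-input, 18-18 hidden, 5-output MLP described in the abstract, here with scikit-learn on toy numbers; the activation, solver and the tiny made-up dataset are assumptions, not the study's settings or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: drying temperature (C), slice thickness (mm), drying time (min)  -- toy data
# y: volume, surface area, perimeter, illuminated area ratios + moisture ratio
X = np.array([[50, 2, 30], [60, 4, 60], [70, 6, 90], [60, 2, 120]], float)
y = np.array([[0.95, 0.96, 0.97, 0.98, 0.90],
              [0.80, 0.85, 0.88, 0.90, 0.65],
              [0.60, 0.70, 0.75, 0.78, 0.35],
              [0.55, 0.66, 0.72, 0.75, 0.30]])

# Two hidden layers of 18 neurons each, mirroring the topology in the abstract.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(18, 18), activation="relu",
                 solver="adam", max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[65, 3, 75]]))   # predicted shrinkage + moisture ratios
```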

  7. Advanced computer graphic techniques for laser range finder (LRF) simulation

    Science.gov (United States)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high, therefore a simulation tool was designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis-Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
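
    The AABB part of such an LRF simulator reduces to a ray-box intersection test per simulated beam; below is a standard slab-method sketch (a CUDA variant would evaluate the same test for many beams in parallel). The values in the example are arbitrary.

```python
import numpy as np

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab-method ray vs axis-aligned bounding box test.

    Returns the distance to the first intersection, or None on a miss."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    inv_d = 1.0 / np.where(direction == 0.0, 1e-12, direction)   # avoid division by zero

    t1 = (np.asarray(box_min, float) - origin) * inv_d
    t2 = (np.asarray(box_max, float) - origin) * inv_d

    t_near = np.max(np.minimum(t1, t2))      # latest entry over the three slabs
    t_far = np.min(np.maximum(t1, t2))       # earliest exit
    if t_near > t_far or t_far < 0.0:
        return None                           # ray misses, or the box is behind the ray
    return max(t_near, 0.0)                   # simulated LRF range reading

# One simulated beam of the scanner:
print(ray_aabb_hit([0, 0, 0], [1, 0, 0], [2, -1, -1], [4, 1, 1]))   # 2.0
```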

  8. Surveying co-located space geodesy techniques for ITRF computation

    Science.gov (United States)

    Sarti, P.; Sillard, P.; Vittuari, L.

    2003-04-01

    We present a comprehensive operational methodology, based on classical geodesy triangulation and trilateration, that allows the determination of the reference points of the five space geodesy techniques used in ITRF computation (i.e., DORIS, GPS, LLR, SLR, VLBI). Most of the time, for a single technique, the reference point is not directly accessible and measurable. Likewise, no mechanically determined ex-centre with respect to an external and measurable point is usually given. In these cases, it is not possible to directly measure the sought reference points, and it is even less straightforward to obtain the statistical information relating these points for different techniques. We outline the most general practical surveying methodology that permits recovery of the reference points of the different techniques regardless of their physical materialization. We also give a detailed analytical approach for less straightforward cases (e.g., non-geodetic VLBI antennae and SLR/LLR systems). We stress the importance of surveying instrumentation and procedure in achieving the best possible results and outline the impact of the information retrieved with our method on ITRF computation. In particular, we will give numerical examples of the computation of the reference point of VLBI antennae (Ny Aalesund and Medicina) and of the ex-centre vector computation linking co-located VLBI and GPS techniques in Medicina (Italy). Special attention was paid to the rigorous derivation of statistical elements, which will be presented in another presentation.

  9. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation.

    Science.gov (United States)

    Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-08-04

    Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported by the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This is achieved by removing the need for deep DSP skills through transfer learning, using a pretrained convolutional neural network (CNN) developed for machine vision purposes for the exercise classification effort. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected. The ability of the
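
    A minimal sketch of the plot-and-retrain workflow described above, using TensorFlow/Keras with an ImageNet-pretrained InceptionV3; the figure size, directory layout and frozen-base training setup are illustrative assumptions rather than the paper's exact configuration.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import tensorflow as tf

def signal_to_image(signal, path):
    """Save a raw tri-axial accelerometer window as a simple line-plot image."""
    fig, ax = plt.subplots(figsize=(3, 3), dpi=100)
    ax.plot(signal)                       # shape (n_samples, 3): x, y, z axes
    ax.axis("off")
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)

def build_classifier(n_exercises=5):
    base = tf.keras.applications.InceptionV3(weights="imagenet",
                                             include_top=False, pooling="avg")
    base.trainable = False                # transfer learning: keep the ImageNet features
    inputs = tf.keras.Input(shape=(299, 299, 3))
    x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)   # Inception-style scaling
    outputs = tf.keras.layers.Dense(n_exercises, activation="softmax")(base(x, training=False))
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Images are assumed to be stored in one folder per exercise label, e.g. plots/squat/...
# train_ds = tf.keras.utils.image_dataset_from_directory("plots/", image_size=(299, 299))
# build_classifier().fit(train_ds, epochs=5)
```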

  10. Cloud computing and digital media fundamentals, techniques, and applications

    CERN Document Server

    Li, Kuan-Ching; Shih, Timothy K

    2014-01-01

    Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more.The book covers resource optimization for multimedia clo

  11. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    Science.gov (United States)

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, in terms of both accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly.

  12. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.

  14. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions
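
    As a hedged illustration of the content-addressable-memory idea behind such model indexing, the sketch below implements a tiny Hopfield-style network: stored patterns are stable states, and a noisy cue relaxes to the nearest stored model. The 8-unit patterns and synchronous update scheme are illustrative, not the paper's architecture.

```python
import numpy as np

class HopfieldMemory:
    """Tiny content-addressable memory: stored patterns are stable states and
    a corrupted cue settles onto the closest stored model (an index, in effect)."""

    def __init__(self, patterns):
        p = np.asarray(patterns, dtype=float)         # rows of +/-1 values
        n = p.shape[1]
        self.W = (p.T @ p) / n                         # Hebbian outer-product rule
        np.fill_diagonal(self.W, 0.0)                  # no self-connections

    def recall(self, cue, n_iters=20):
        s = np.asarray(cue, dtype=float).copy()
        for _ in range(n_iters):                       # synchronous sign updates
            s = np.where(self.W @ s >= 0.0, 1.0, -1.0)
        return s

# Two 8-unit "model memories"; a corrupted cue relaxes onto the closest one.
models = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                   [1, -1, 1, -1, 1, -1, 1, -1]])
mem = HopfieldMemory(models)
cue = np.array([1, 1, 1, -1, -1, -1, -1, -1])          # one flipped unit
print(mem.recall(cue))                                  # -> first stored pattern
```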

  15. Visualization of Minkowski operations by computer graphics techniques

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Blaauwgeers, G.S.M.; Serra, J; Soille, P

    1994-01-01

    We consider the problem of visualizing 3D objects defined as a Minkowski addition or subtraction of elementary objects. It is shown that such visualizations can be obtained by using techniques from computer graphics such as ray tracing and Constructive Solid Geometry. Applications of the method are
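
    The paper visualizes Minkowski operations on 3D objects with ray tracing and CSG; as a simpler illustration of the underlying set operation, the sketch below computes a 2D Minkowski addition as a binary dilation with SciPy (erosion would give the Minkowski subtraction). The shapes are arbitrary examples.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# For binary images, morphological dilation of A by structuring element B equals
# the Minkowski addition A (+) B of the two point sets.
A = np.zeros((64, 64), bool)
A[20:40, 20:40] = True                          # a 20x20 square object

yy, xx = np.mgrid[-5:6, -5:6]
B = (xx ** 2 + yy ** 2) <= 25                   # a disc of radius 5 as structuring element

A_plus_B = binary_dilation(A, structure=B)      # the square grown by the disc, with rounded corners
print(A.sum(), A_plus_B.sum())                  # area before and after the Minkowski addition
```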

  16. THE DOMAIN DECOMPOSITION TECHNIQUES FOR THE FINITE ELEMENT PROBABILITY COMPUTATIONAL METHODS

    Institute of Scientific and Technical Information of China (English)

    LIU Xiaoqi

    2000-01-01

    In this paper, we shall study the domain decomposition techniques for the finite element probability computational methods. These techniques provide a theoretical basis for parallel probability computational methods.

  17. A computer vision system for the recognition of trees in aerial photographs

    Science.gov (United States)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  18. Computational technique for stepwise quantitative assessment of equation correctness

    Science.gov (United States)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

    Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions for solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which has been developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
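
    A hedged sketch of the multiset-based token comparison idea underlying such stepwise checking: equations are tokenized and compared as multisets with a Dice-style score. The tokenizer and scoring rule here are illustrative assumptions, not the SCCS technique's exact definitions.

```python
import re
from collections import Counter

def tokenize(equation):
    """Split an equation string into coarse tokens (numbers, names, operators)."""
    return re.findall(r"\d+\.?\d*|[A-Za-z]+|[=+\-*/^()]", equation)

def multiset_similarity(step, reference):
    """Dice-style similarity between the token multisets of a student step and
    a reference step; 1.0 means the token content is structurally identical."""
    a, b = Counter(tokenize(step)), Counter(tokenize(reference))
    overlap = sum((a & b).values())                 # multiset intersection size
    return 2.0 * overlap / (sum(a.values()) + sum(b.values()))

# Grading one working step of 3x + 6 = 0 against the expected response:
print(multiset_similarity("3x = -6", "3x = -6"))    # 1.0  -> full marks
print(multiset_similarity("3x = 6", "3x = -6"))     # < 1  -> partial credit
```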

  19. Temporomandibular joint computed tomography: development of a direct sagittal technique

    Energy Technology Data Exchange (ETDEWEB)

    van der Kuijl, B.; Vencken, L.M.; de Bont, L.G.; Boering, G. (Univ. of Groningen, (Netherlands))

    1990-12-01

    Radiology plays an important role in the diagnosis of temporomandibular disorders. Different techniques are used, with computed tomography offering simultaneous imaging of bone and soft tissues. It is therefore suited to visualization of the articular disk and may be used in patients with suspected internal derangements and other disorders of the temporomandibular joint. Previous research suggests advantages to direct sagittal scanning, which requires special positioning of the patient and a sophisticated scanning technique. This study describes the development of a new technique of direct sagittal computed tomographic imaging of the temporomandibular joint using a specially designed patient table and internal light visor positioning. No structures other than the patient's head are involved in the imaging process, and misleading artifacts from the arm or the shoulder are eliminated. The use of the scanogram allows precise correction of the condylar axis and selection of the exact slice level.

  20. Using Computing Intelligence Techniques to Estimate Software Effort

    Directory of Open Access Journals (Sweden)

    Jin-Cherng Lin

    2013-02-01

    Full Text Available In the IT industry, precisely estimating the effort, development cost and schedule of each software project counts for much to the software company, so precise estimation of manpower is becoming more and more important. In the past, IT companies estimated the work effort of manpower through human experts, using statistical methods. However, the outcomes often failed to satisfy the management level. Recently it has become an interesting topic whether computing intelligence techniques can do better in this field. This research uses computing intelligence techniques, such as the Pearson product-moment correlation coefficient method and the one-way ANOVA method to select key factors, and the K-Means clustering algorithm for project clustering, to estimate software project effort. The experimental results show that using computing intelligence techniques to estimate software project effort yields more precise and more effective estimations than traditional human experts did.
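
    A minimal sketch of the described pipeline, Pearson-correlation key-factor selection followed by K-Means project clustering, on made-up project records; the features, correlation threshold and cluster count are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy project records: [team size, KLOC, duration (months)] and effort (person-months).
X = np.array([[4, 12, 6], [10, 55, 14], [3, 8, 5], [12, 60, 16], [5, 15, 7]], float)
effort = np.array([20.0, 110.0, 14.0, 130.0, 26.0])

# 1) Key-factor selection: keep features whose Pearson correlation with effort is strong.
r = np.array([np.corrcoef(X[:, j], effort)[0, 1] for j in range(X.shape[1])])
selected = np.abs(r) > 0.7                        # threshold is an assumption
Xs = X[:, selected]

# 2) Cluster similar projects and estimate a new project's effort from its cluster mean.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xs)
new_project = np.array([[6, 18, 8]], float)[:, selected]
cluster = km.predict(new_project)[0]
print("estimated effort:", effort[km.labels_ == cluster].mean(), "person-months")
```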

  1. Enhancement of vision systems based on runway detection by image processing techniques

    Science.gov (United States)

    Gulec, N.; Sen Koktas, N.

    2012-06-01

    An explicit way of facilitating approach and landing operations of fixed-wing aircraft in degraded visual environments is presenting a coherent image of the designated runway via vision systems, and hence increasing the situational awareness of the flight crew. Combined vision systems, in general, aim to provide a clear view of the aircraft exterior to the pilots using information from databases and imaging sensors. This study presents a novel method that consists of image processing and tracking algorithms, which utilize information from navigation systems and databases along with the images from daylight and infrared cameras, for the recognition and tracking of the designated runway through the approach and landing operation. Video data simulating the straight-in approach of an aircraft from an altitude of 5000 ft down to 100 ft is synthetically generated by a COTS tool. A diverse set of atmospheric conditions, such as fog and low light levels, is simulated in these videos. The detection rate (DR) and false alarm rate (FAR) are used as the primary performance metrics. The results are presented in a format where the performance metrics are compared against the altitude of the aircraft. Depending on the visual environment and the source of the video, the performance metrics reach up to 98% for DR and down to 5% for FAR.

  2. INJECT AN ELASTIC GRID COMPUTING TECHNIQUES TO OPTIMAL RESOURCE MANAGEMENT TECHNIQUE OPERATIONS

    Directory of Open Access Journals (Sweden)

    R. Surendran

    2013-01-01

    Full Text Available Resource sharing on the Internet has evolved through the dynamic technique of grid computing. Dynamic grid computing is resource sharing in large-scale, high-performance computing networks worldwide. Existing systems offer limited innovation for the resource management process. In the proposed work, grid computing is used as Internet-based computing for Optimal Resource Management Technique Operations (ORMTO). ORMTO comprises an elastic scheduling algorithm, prediction of the best grid node for a task, fault-tolerant resource selection, perfect resource co-allocation, grid-balanced resource matchmaking, agent-based grid service, and wireless mobility resource access. The various resource management techniques are surveyed based on performance measurement factors such as time complexity, space complexity and energy complexity in order to find the ORMTO for grid computing. The objectives of ORMTO are to provide efficient automatic resource co-allocation for a user who submits a job without grid knowledge, to design a grid service (portal) that selects the best fault-tolerant resource for a given task in a fast, secure and efficient manner, and to provide an enhanced grid balancing system for multi-tasking via hybrid-topology-based grid ranking. The best Quality of Service (QoS) parameters play an important role in all resource management techniques, and the proposed ORMTO uses a greater number of QoS parameters for better enhancement of existing techniques. In the proposed system, the enhanced techniques and algorithms are used to improve the grid-based ORMTO.

  3. Neural Network Prediction of Failure of Damaged Composite Pressure Vessels from Strain Field Data Acquired by a Computer Vision Method

    Science.gov (United States)

    Russell, Samuel S.; Lansing, Matthew D.

    1997-01-01

    This effort used a new and novel method of acquiring strains called Sub-pixel Digital Video Image Correlation (SDVIC) on impact-damaged Kevlar/epoxy filament wound pressure vessels during a proof test. To predict the burst pressure, the hoop strain field distribution around the impact location from three vessels was used to train a neural network. The network was then tested on additional pressure vessels. Several variations on the network were tried; the best results were obtained using a single hidden layer. SDVIC is a full-field, non-contact computer vision technique which provides in-plane deformation and strain data over a load differential. This method was used to determine hoop and axial displacements, hoop and axial linear strains, and the in-plane shear strains and rotations in the regions surrounding impact sites in filament wound pressure vessels (FWPV) during proof loading by internal pressurization. The relationship between these deformation measurement values and the remaining life of the pressure vessels, however, requires a complex theoretical model or numerical simulation. Both of these techniques are time consuming and complicated. Previous results using neural network methods had been successful in predicting the burst pressure for graphite/epoxy pressure vessels based upon acoustic emission (AE) measurements in similar tests. The neural network associates the character of the AE amplitude distribution, which depends upon the extent of impact damage, with the burst pressure. Similarly, higher amounts of impact damage are theorized to cause a higher amount of strain concentration in the damage-affected zone at a given pressure and result in lower burst pressures. This relationship suggests that a neural network might be able to find an empirical relationship between the SDVIC strain field data and the burst pressure, analogous to the AE method, with greater speed and simplicity than theoretical or finite element modeling. The process of testing SDVIC

  4. Development of a Computer Vision Technology for the Forest Products Manufacturing Industry

    Science.gov (United States)

    D. Earl Kline; Richard Conners; Philip A. Araman

    1992-01-01

    The goal of this research is to create an automated processing/grading system for hardwood lumber that will be of use to the forest products industry. The objective of creating a full scale machine vision prototype for inspecting hardwood lumber will become a reality in calendar year 1992. Space for the full scale prototype has been created at the Brooks Forest...

  5. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    CERN Document Server

    Buyya, Rajkumar; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software), and holistically work to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of ...

  6. Comparability of the performance of in-line computer vision for geometrical verification of parts, produced by Additive Manufacturing

    DEFF Research Database (Denmark)

    Pedersen, David B.; Hansen, Hans N.

    2014-01-01

    ...-customized parts with narrow geometrical tolerances require individual verification, whereas many hyper-complex parts simply cannot be measured by traditional means such as optical or mechanical measurement tools. This paper addresses the challenge by detailing how in-line computer vision has been employed ... in order to verify geometrical tolerances. The paper addresses to which precision tolerance verification has been achieved, by assessing the reconstruction capability against reference 3D scanning for a selected number of AM processes. Geometrical verification was achieved down to a precision of 20 μm ...

  7. Dental wear estimation using a digital intra-oral optical scanner and an automated 3D computer vision method.

    Science.gov (United States)

    Meireles, Agnes Batista; Vieira, Antonio Wilson; Corpas, Livia; Vandenberghe, Bart; Bastos, Flavia Souza; Lambrechts, Paul; Campos, Mario Montenegro; Las Casas, Estevam Barbosa de

    2016-01-01

    The objective of this work was to propose an automated and direct process to grade tooth wear intra-orally. Eight extracted teeth were etched with acid for different times to produce wear and scanned with an intra-oral optical scanner. Computer vision algorithms were used for alignment and comparison among models. Wear volume was estimated and visual scoring was achieved to determine reliability. Results demonstrated that it is possible to directly detect submillimeter differences in teeth surfaces with an automated method with results similar to those obtained by direct visual inspection. The investigated method proved to be reliable for comparison of measurements over time.

  8. Using machine vision and data mining techniques to identify cell properties via microfluidic flow analysis

    Science.gov (United States)

    Horowitz, Geoffrey; Bowie, Samuel; Liu, Anna; Stone, Nicholas; Sulchek, Todd; Alexeev, Alexander

    2016-11-01

    In order to quickly identify the wide range of mechanistic properties seen in cell populations, a coupled machine vision and data mining analysis is developed to examine high-speed videos of cells flowing through a microfluidic device. The microfluidic device contains a microchannel decorated with a periodic array of diagonal ridges. The ridges compress the flowing cells, which results in complex cell trajectories and induces cell cross-channel drift; both depend on the cells' intrinsic mechanical properties and can be used to characterize specific cell lines. Thus, the cell trajectory analysis can yield a parameter set that serves as a unique identifier of a cell's membership in a specific cell population. By using the correlations between the cell populations and the measured cell trajectories in the ridged microchannel, the mechanical properties of individual cells and their specific populations can be identified using only information captured through video analysis. Financial support provided by National Science Foundation (NSF) Grant No. CMMI 1538161.

  9. Error analysis in correlation computation of single particle reconstruction technique

    Institute of Scientific and Technical Information of China (English)

    胡悦; 隋森芳

    1999-01-01

    The single particle reconstruction technique has become particularly important in the structure analysis of biomacromolecules. The problem of reconstructing a picture from identical samples polluted by colored noises is studied, and the alignment error in the correlation computation of the single particle reconstruction technique is analyzed systematically. The concept of systematic error is introduced, and the explicit form of the systematic error is given under the weak noise approximation. The influence of the systematic error on the reconstructed picture is also discussed, and an analytical formula for correcting the distortion in the picture reconstruction is obtained.

  10. Computer-Assisted Technique for Surgical Tooth Extraction.

    Science.gov (United States)

    Hamza, Hosamuddin

    2016-01-01

    Introduction. Surgical tooth extraction is a common procedure in dentistry. However, numerous extraction cases show a high level of difficulty in practice. This difficulty is usually related to inadequate visualization, improper instrumentation, or other factors related to the targeted tooth (e.g., ankyloses or presence of bony undercut). Methods. In this work, the author presents a new technique for surgical tooth extraction based on 3D imaging, computer planning, and a new concept of computer-assisted manufacturing. Results. The outcome of this work is a surgical guide made by 3D printing of plastics and CNC of metals (hybrid outcome). In addition, the conventional surgical cutting tools (surgical burs) are modified with a number of stoppers adjusted to avoid any excessive drilling that could harm bone or other vital structures. Conclusion. The present outcome could provide a minimally invasive technique to overcome the routine complications facing dental surgeons in surgical extraction procedures.

  11. Computer-Assisted Technique for Surgical Tooth Extraction

    Directory of Open Access Journals (Sweden)

    Hosamuddin Hamza

    2016-01-01

    Full Text Available Introduction. Surgical tooth extraction is a common procedure in dentistry. However, numerous extraction cases show a high level of difficulty in practice. This difficulty is usually related to inadequate visualization, improper instrumentation, or other factors related to the targeted tooth (e.g., ankyloses or presence of bony undercut). Methods. In this work, the author presents a new technique for surgical tooth extraction based on 3D imaging, computer planning, and a new concept of computer-assisted manufacturing. Results. The outcome of this work is a surgical guide made by 3D printing of plastics and CNC of metals (hybrid outcome). In addition, the conventional surgical cutting tools (surgical burs) are modified with a number of stoppers adjusted to avoid any excessive drilling that could harm bone or other vital structures. Conclusion. The present outcome could provide a minimally invasive technique to overcome the routine complications facing dental surgeons in surgical extraction procedures.

  12. Modeling Visual Information Processing in Brain: A Computer Vision Point of View and Approach

    CERN Document Server

    Diamant, Emanuel

    2007-01-01

    We live in the Information Age, and information has become a critically important component of our life. The success of the Internet made huge amounts of it easily available and accessible to everyone. To keep the flow of this information manageable, means for its faultless circulation and effective handling have become urgently required. Considerable research efforts are dedicated today to addressing this necessity, but they are seriously hampered by the lack of a common agreement about "What is information?" In particular, what is "visual information", the human's primary input from the surrounding world. The problem is further aggravated by a long-lasting stance borrowed from biological vision research that assumes human-like information processing to be an enigmatic mix of perceptual and cognitive vision faculties. I am trying to find a remedy for this bizarre situation. Relying on a new definition of "information", which can be derived from Kolmogorov's complexity theory and Chaitin's notion of algorithmic inf...

  13. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age and Race/Ethnicity ...

  14. A computer vision integration model for a multi-modal cognitive system

    OpenAIRE

    Vrecko A.; Skocaj D.; Hawes N.; Leonardis A.

    2009-01-01

    We present a general method for integrating visual components into a multi-modal cognitive system. The integration is very generic and can combine an arbitrary set of modalities. We illustrate our integration approach with a specific instantiation of the architecture schema that focuses on integration of vision and language: a cognitive system able to collaborate with a human, learn and display some understanding of its surroundings. As examples of cross-modal interaction we describe mechanis...

  15. International Conference on Soft Computing Techniques and Engineering Application

    CERN Document Server

    Li, Xiaolong

    2014-01-01

    The main objective of ICSCTEA 2013 is to provide a platform for researchers, engineers and academicians from all over the world to present their research results and development activities in soft computing techniques and engineering application. This conference provides opportunities for them to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration.

  16. Parallel Radiosity Techniques for Mesh-Connected SIMD Computers

    Science.gov (United States)

    1991-07-01

    ... of equations Ax = b, one can find corresponding stages in the Gauss-Seidel method. The form factor calculation stage corresponds to the computation ... to be planar (F_ii = 0 for all i), iterative techniques such as the Gauss-Seidel method fare much better for this system. In the progressive refinement ... this light, the solution of the radiosity system of equations using the Gauss-Seidel method is a sequential one, at least at a macro level. However ...
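
    As a plain (non-parallel) illustration of the Gauss-Seidel iteration referred to in this excerpt, the sketch below solves a small diagonally dominant system; the report itself is concerned with mapping such radiosity solutions onto mesh-connected SIMD hardware, which this sketch does not attempt.

```python
import numpy as np

def gauss_seidel(A, b, n_iters=100, tol=1e-10):
    """Solve Ax = b by Gauss-Seidel iteration (A assumed diagonally dominant,
    as a radiosity system is when the self form factors F_ii are zero)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    for _ in range(n_iters):
        x_old = x.copy()
        for i in range(len(b)):
            # Use the newest values for j < i and the previous values for j > i.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Small diagonally dominant example (stands in for a radiosity system):
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))   # the two results should agree
```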

  17. A survey of computational intelligence techniques in protein function prediction.

    Science.gov (United States)

    Tiwari, Arvind Kumar; Srivastava, Rajeev

    2014-01-01

    In the recent past, there was massive growth in the number of proteins of unknown function, driven by the advancement of high-throughput microarray technologies. Protein function prediction is the most challenging problem in bioinformatics. Previously, homology-based approaches were used to predict protein function, but they fail when a new protein is different from previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, applied in wide areas such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers to solve these problems by using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction.

  18. Effect of Computer Animation Technique on Students' Comprehension of the

    Directory of Open Access Journals (Sweden)

    Gokhan AKSOY

    2013-04-01

    Full Text Available The purpose of this study is to determine the effect of the computer animation technique on the academic achievement of students in the 'Solar System and Beyond' unit, taught as part of the seventh-grade Science and Technology course in primary education. The sample of the study consists of 60 seventh-grade students in two different classes during the 2011-2012 academic year. Lectures in the experimental group were given with the computer animation technique, while in the control group PowerPoint presentations and videos were used along with traditional teaching methods. According to the findings, the computer animation technique is more effective than traditional teaching methods in enhancing students' achievement. It was also determined that the PowerPoint presentations and related videos used together with traditional teaching methods in the control group significantly helped students increase their academic achievement.

  19. Numerical Computational Technique for Scattering from Underwater Objects

    Directory of Open Access Journals (Sweden)

    T. Ratna Mani

    2013-01-01

    Full Text Available This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scatter has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric study has been carried out by considering different pulse frequencies, pulse lengths, pulse types (CW, LFM, SFM), sampling frequencies, as well as different sizes and shapes of the scattering body and grid sizes. It has been observed that increasing the pulse frequency, sampling frequency and number of elements leads to improved results, and good accuracy has been achieved with an element size less than one third of the wavelength. The experimental result for the underwater object was found to be very close to the simulated result. This technique is useful for computing forward scatter for inverse scattering applications, as well as for generating forward scatter of very narrow and wide band signals of any pulse type and body shape. Defence Science Journal, 2013, 63(1), pp. 119-126, DOI: http://dx.doi.org/10.14429/dsj.63.779
