WorldWideScience

Sample records for tutorial computer vision

  1. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    Science.gov (United States)

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is attracting increasing interest. This is due to several significant advantages, such as simplicity of use and easy combination with portable, widely distributed imaging devices, resulting in user-friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, the period in which 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed.
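
    As a hedged illustration of the colour-based measurement idea described in this record (not code from the review itself), the sketch below converts a synthetic RGB sensor-spot patch to HSV and maps its mean hue to a concentration through an assumed, pre-fitted linear calibration; the calibration coefficients and patch values are invented for illustration.

    ```python
    # Minimal sketch (not from the review): estimate an analyte signal from the mean
    # hue of a sensor-spot image patch, assuming a previously fitted linear
    # calibration hue -> concentration. All numbers here are illustrative.
    import colorsys
    import numpy as np

    def mean_hue(patch_rgb):
        """patch_rgb: (H, W, 3) array of floats in [0, 1]. Returns mean hue in [0, 1)."""
        flat = patch_rgb.reshape(-1, 3)
        hues = np.array([colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in flat])
        return float(hues.mean())

    # Hypothetical calibration: concentration = a * hue + b (fitted elsewhere).
    a, b = 12.5, -1.3

    patch = np.tile(np.array([0.8, 0.4, 0.2]), (32, 32, 1))   # synthetic orange-ish spot
    conc = a * mean_hue(patch) + b
    print(f"estimated concentration: {conc:.2f} (arbitrary units)")
    ```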

  2. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  3. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
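
    The survey mentions recognition by maximizing cross-correlation coefficients; below is a minimal, brute-force NumPy sketch of that idea (normalised cross-correlation template matching), written for clarity rather than speed and not taken from the survey.

    ```python
    # Illustrative sketch: locate a template in an image by maximising the
    # normalised cross-correlation coefficient (brute force, NumPy only).
    import numpy as np

    def ncc_match(image, template):
        """Return (row, col) of the best match of `template` in `image` (2-D float arrays)."""
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.sqrt((t ** 2).sum())
        best, best_pos = -np.inf, (0, 0)
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                w = image[r:r + th, c:c + tw]
                wz = w - w.mean()
                denom = np.sqrt((wz ** 2).sum()) * t_norm
                score = (wz * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    tpl = img[20:30, 40:50].copy()
    print(ncc_match(img, tpl))   # expected: (20, 40)
    ```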

  4. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition brings in more of the concepts and applications of computer vision, making it a comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, showing engineers the realities of implementing the principles in practice; New chapters containing case studies on surveillance and driver assistance systems give practical methods for these cutting-edge applications in computer vision; Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  5. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  7. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt
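
    As a hedged companion to the Viola-Jones material mentioned above (the book's own examples are in Matlab), this Python sketch runs OpenCV's bundled Haar-cascade face detector; the input path is a placeholder and the parameters are common defaults, not values from the book.

    ```python
    # Hedged sketch of Viola-Jones detection with OpenCV's bundled Haar cascade.
    # Assumes the opencv-python package; "photo.jpg" is a placeholder input path.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")                    # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:                       # draw one box per detection
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("photo_faces.jpg", img)
    ```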

  8. Computer Vision Syndrome.

    Science.gov (United States)

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  9. Embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2008-01-01

    Brings together experiences from researchers in the field of embedded computer vision, from both academic and industrial research centers, and covers a broad range of challenges and trade-offs brought about by this paradigm shift. The title emphasizes tackling important problems for society, safety, security, health, and mobility.

  10. Computer Vision Systems

    Science.gov (United States)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and in industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size and surface texture. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.
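
    A minimal sketch, under stated assumptions, of extracting the external quality attributes mentioned above (size, shape, colour) from an image of a single food item on a dark background; it uses OpenCV and an invented placeholder file name, and is not a published inspection system.

    ```python
    # Illustrative sketch: simple external quality features (size, shape, mean colour)
    # for one food item on a dark background. "apple.png" is a placeholder path.
    import cv2
    import numpy as np

    img = cv2.imread("apple.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    item = max(contours, key=cv2.contourArea)        # assume the largest blob is the item

    area = cv2.contourArea(item)                     # size (pixels)
    perimeter = cv2.arcLength(item, True)
    circularity = 4 * np.pi * area / perimeter ** 2  # shape: 1.0 for a perfect circle
    mean_bgr = cv2.mean(img, mask=mask)[:3]          # colour inside the item mask

    print(f"area={area:.0f}px  circularity={circularity:.2f}  mean BGR={mean_bgr}")
    ```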

  11. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). · Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics · Emphasis on algorithmic advances that will allow re-application in other...
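
    As a small, hedged illustration of one entity listed above, the snippet below computes the geodesic (Riemannian) distance between two rotation matrices on SO(3), i.e. the angle of the relative rotation, and contrasts it with the naive Euclidean distance; the rotations are arbitrary examples, not data from the book.

    ```python
    # Geodesic vs. chordal distance between two rotations, using SciPy.
    import numpy as np
    from scipy.spatial.transform import Rotation

    R1 = Rotation.from_euler("xyz", [10, 20, 30], degrees=True)
    R2 = Rotation.from_euler("xyz", [12, 25, 28], degrees=True)

    # Geodesic distance = rotation angle of R1^{-1} R2 (length of the log map).
    geodesic = (R1.inv() * R2).magnitude()
    print(f"geodesic distance on SO(3): {np.degrees(geodesic):.2f} degrees")

    # Contrast with the naive Euclidean (Frobenius/chordal) distance between matrices.
    chordal = np.linalg.norm(R1.as_matrix() - R2.as_matrix())
    print(f"chordal (Frobenius) distance: {chordal:.4f}")
    ```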

  12. Design and evaluation of a computer tutorial on electric fields

    Science.gov (United States)

    Morse, Jeanne Jackson

    Research has shown that students do not fully understand electric fields and their interactions with charged particles after completing traditional classroom instruction. The purpose of this project was to develop a computer tutorial to remediate some of these difficulties. Research on the effectiveness of computer-delivered instructional materials showed that students would learn better from media incorporating user-controlled interactive graphics. Two versions of the tutorial were tested. One version used interactive graphics and the other used static graphics. The two versions of the tutorial were otherwise identical. This project was done in four phases. Phases I and II were used to refine the topics covered in the tutorial and to test the usability of the tutorial. The final version of the tutorial was tested in Phases III and IV. The tutorial was tested using a pretest-posttest design with a control group. Both tests were administered in an interview setting. The tutorial using interactive graphics was more effective at remediating students' difficulties than the tutorial using static graphics for students in Phase III (p = 0.001). In Phase IV students who viewed the tutorial with static graphics did better than those viewing interactive graphics. The sample size in Phase IV was too small for this to be a statistically meaningful result. Some student reasoning errors were noted during the interviews. These include difficulty with the vector representation of electric fields, treating electric charge as if it were mass, using faulty algebraic reasoning to answer questions involving ratios and proportions, and using Coulomb's law in situations in which it is not appropriate.

  13. An overview of computer vision

    Science.gov (United States)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  14. pyro: Python-based tutorial for computational methods for hydrodynamics

    Science.gov (United States)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
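
    For orientation, here is a minimal sketch of the kind of scheme pyro teaches, written independently of pyro's own code and API: first-order upwind finite differences for 1-d linear advection with periodic boundaries.

    ```python
    # Minimal sketch (not pyro's API): first-order upwind scheme for u_t + a u_x = 0
    # on a periodic 1-d grid, advecting a Gaussian pulse.
    import numpy as np

    nx, a, cfl = 200, 1.0, 0.8
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / abs(a)

    u = np.exp(-200 * (x - 0.3) ** 2)       # initial Gaussian pulse

    t = 0.0
    while t < 0.5:
        u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind difference (a > 0)
        t += dt

    print("pulse peak is now near x =", x[np.argmax(u)])   # roughly 0.3 + a*t = 0.8
    ```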

  15. Machine vision is not computer vision

    Science.gov (United States)

    Batchelor, Bruce G.; Charlier, Jean-Ray

    1998-10-01

    The identity of Machine Vision as an academic and practical subject of study is asserted. In particular, the distinction between Machine Vision on the one hand and Computer Vision, Digital Image Processing, Pattern Recognition and Artificial Intelligence on the other is emphasized. The article demonstrates through four case studies that the active involvement of a person who is sensitive to the broad aspects of vision system design can avoid disaster and can often achieve a successful machine that would not otherwise have been possible. This article is a transcript of the keynote address presented at the conference. Since the proceedings are prepared and printed before the conference, it is not possible to include a record of the response to this paper made by the delegates during the round-table discussion. It is hoped to collate and disseminate these via the World Wide Web after the event. (A link will be provided at http://bruce.cs.cf.ac.uk/bruce/index.html.)

  16. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    Acts as a single-source reference providing readers with an overview of how computer vision can contribute to different applications in the field of road transportation. This book presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each presented in a tutorial manner, describing the motivation for and benefits of the application and the state of the art.

  17. Computer vision syndrome: a review.

    Science.gov (United States)

    Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W

    2005-01-01

    As computers become part of our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and the treatment modalities that are available at this time. Computer vision syndrome symptoms may result from ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of the computer monitor and regular work breaks may help improve visual comfort. Lubricating eye drops and special computer glasses help relieve ocular surface-related symptoms. More work needs to be done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes.

  18. Artificial intelligence and computer vision

    CERN Document Server

    Li, Yujie

    2017-01-01

    This edited book presents essential findings in the research fields of artificial intelligence and computer vision, with a primary focus on new research ideas and results for mathematical problems involved in computer vision systems. The book provides an international forum for researchers to summarize the most recent developments and ideas in the field, with a special emphasis on the technical and observational results obtained in the past few years.

  19. Machine Learning for Computer Vision

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2013-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year. A summary of the past Computer Vision Summer Schools can be found at: http://www.dmi.unict.it/icvss This edited volume contains a selection of articles covering some of the talks and t...

  20. A Computer-based Tutorial on Double-Focusing Spectrometers

    Science.gov (United States)

    Silbar, Richard R.; Browman, Andrew A.; Mead, William C.; Williams, Robert A.

    1998-10-01

    WhistleSoft is developing a set of computer-based, self-paced tutorials on particle accelerators that targets a broad audience, including undergraduate science majors and industrial technicians. (See http://www.whistlesoft.com/s~ilbar/.) We use multimedia techniques to enhance the student's rate of learning and retention of the material. The tutorials feature interactive On-Screen Laboratories and use hypertext, colored graphics, two- and three-dimensional animations, video, and sound. Parts of our Dipoles module deal with the double-focusing spectrometer and occur throughout the piece. Radial focusing appears in the section on uniform magnets, while vertical focusing is covered in the non-uniform magnets section. The student can even understand the √2π bend angle by working through the (intermediate-level) discussion of the Kerst-Serber equations. This talk will present our discussion of this spectrometer, direct to you from the computer screen.
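
    For readers curious where the √2π figure comes from, the following is a brief sketch of the standard accelerator-physics result (not quoted from the tutorial itself): with field index n, the Kerst-Serber equations give equal radial and vertical focusing at n = 1/2, and a point source is re-imaged after a bend of π/√(1-n) = √2·π.

    ```latex
    % Sketch of the standard result behind the sqrt(2)*pi bend (not quoted from the
    % tutorial). With field index n = -(r/B)(dB/dr), the Kerst-Serber equations for
    % small radial (x) and vertical (z) displacements per unit bend angle theta are:
    \[
      \frac{d^{2}x}{d\theta^{2}} + (1-n)\,x = 0 , \qquad
      \frac{d^{2}z}{d\theta^{2}} + n\,z = 0 .
    \]
    % Equal radial and vertical focusing (double focusing) requires 1 - n = n, so n = 1/2.
    % A point source is re-imaged when the betatron phase advance reaches pi:
    \[
      \sqrt{1-n}\;\theta = \pi
      \quad\Longrightarrow\quad
      \theta = \frac{\pi}{\sqrt{1/2}} = \sqrt{2}\,\pi \approx 254.6^{\circ} .
    \]
    ```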

  1. Learning in Computer Vision and Image Understanding

    OpenAIRE

    Greenspan, Hayit

    1994-01-01

    There is an increasing interest in the area of Learning in Computer Vision and Image Understanding, both from researchers in the learning community and from researchers involved with the computer vision world. The field is characterized by a shift away from the classical, purely model-based, computer vision techniques, towards data-driven learning paradigms for solving real-world vision problems.

  2. Computer Vision and Mathematical Morphology

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Kropratsch, W.; Klette, R.; Albrecht, R.

    1996-01-01

    Mathematical morphology is a theory of set mappings, modeling binary image transformations, which are invariant under the group of Euclidean translations. This framework turns out to be too restricted for many applications, in particular for computer vision where group theoretical considerations suc

  4. The Computational Study of Vision.

    Science.gov (United States)

    1988-04-01

    ... provide only partial information about the 2-D velocity field, due to the aperture problem (Wallach, 1976; Fennema and Thompson, 1979; Burt and ...) ... computer vision studies and in biological models of motion measurement (for example, Lappin and Bell, 1976; Pantle and Picciano, 1976; Fennema and ...) ... Fennema, C. L., Thompson, W. B. 1979. Velocity determination in scenes containing several moving objects. Comput. Graph. Image Proc. 9:301-315

  5. Advances in embedded computer vision

    CERN Document Server

    Kisacanin, Branislav

    2014-01-01

    This illuminating collection offers a fresh look at the very latest advances in the field of embedded computer vision. Emerging areas covered by this comprehensive text/reference include the embedded realization of 3D vision technologies for a variety of applications, such as stereo cameras on mobile devices. Recent trends towards the development of small unmanned aerial vehicles (UAVs) with embedded image and video processing algorithms are also examined. The authoritative insights range from historical perspectives to future developments, reviewing embedded implementation, tools, technolog

  6. Computer vision in microstructural analysis

    Science.gov (United States)

    Srinivasan, Malur N.; Massarweh, W.; Hough, C. L.

    1992-01-01

    The following is a laboratory experiment designed to be performed by advanced high-school and beginning college students. It is hoped that this experiment will create an interest in and further understanding of materials science. The objective of this experiment is to demonstrate that the microstructure of engineered materials is affected by the processing conditions in manufacture, and that it is possible to characterize the microstructure using image analysis with a computer. The principle of computer vision will first be introduced, followed by a description of the system developed at Texas A&M University. This in turn will be followed by a description of the experiment to obtain differences in microstructure and the characterization of the microstructure using computer vision.

  7. COMPUTER VISION SYNDROME: A SHORT REVIEW

    National Research Council Canada - National Science Library

    Sameena; Mohd Inayatullah

    2012-01-01

    ... The increased usage of computers has led to a variety of ocular symptoms, which includes eye strain, tired eyes, irritation, redness, blurred vision, and diplopia, collectively referred to as Computer Vision Syndrome (CVS...

  8. Mahotas: Open source software for scriptable computer vision

    Directory of Open Access Journals (Sweden)

    Luis Pedro Coelho

    2013-07-01

    Mahotas is a computer vision library for Python. It contains traditional image processing functionality such as filtering and morphological operations, as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, a dynamic programming language, which is appropriate for fast development, but the algorithms are implemented in C++ and are tuned for speed. The library is designed to fit in with the scientific software ecosystem in this language and can leverage the existing infrastructure developed in that language. Mahotas is released under a liberal open source license (MIT License) and is available from http://github.com/luispedro/mahotas and from the Python Package Index (http://pypi.python.org/pypi/mahotas). Tutorials and full API documentation are available online at http://mahotas.readthedocs.org/.
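
    A minimal usage sketch based on Mahotas' documented interface (Gaussian smoothing, Otsu thresholding and connected-component labelling on a synthetic image); treat the exact call signatures as assumptions and consult the documentation linked above for the authoritative API.

    ```python
    # Counting bright blobs in a synthetic image with Mahotas (minimal sketch).
    import mahotas as mh
    import numpy as np

    rng = np.random.default_rng(1)
    img = (rng.random((256, 256)) * 40).astype(np.uint8)
    img[60:80, 60:80] += 120                     # two bright square "cells"
    img[150:170, 180:200] += 120

    smoothed = mh.gaussian_filter(img.astype(float), 2.0)
    thresh = mh.otsu(smoothed.astype(np.uint8))  # Otsu threshold (integer images)
    binary = smoothed > thresh
    labeled, n_objects = mh.label(binary)        # connected-component labelling
    print("objects found:", n_objects)           # expected: 2
    ```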

  9. Understanding and Preventing Computer Vision Syndrome

    OpenAIRE

    REDDY SC; LOH KY

    2008-01-01

    The invention of the computer and advancement in information technology have revolutionized and benefited society but at the same time have caused symptoms related to their usage such as ocular sprain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms that lead to computer vision syn...

  10. Parallel Algorithms for Computer Vision.

    Science.gov (United States)

    1989-01-01

    ... developed algorithms for several early vision processes, such as edge detection, stereo ... stage at which they are used, for example by a navigation ... system operates by receiving a stream of instructions from its front end computer. A microcontroller receives the instructions, expands each of them ... instructions flow into the Connection Machine hardware from the front end. These macro-instructions are sent to a microcontroller, which expands them

  11. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: · Morphological Image Analysis for Computer Vision Applications · Methods for Detecting Structural Changes in Computer Vision Systems · Hierarchical Adaptive KL-based Transform: Algorithms and Applications · Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores · A Way of Energy Analysis for Image and Video Sequence Processing · Optimal Measurement of Visual Motion Across Spatial and Temporal Scales · Scene Analysis Using Morphological Mathematics and Fuzzy Logic · Digital Video Stabilization in Static and Dynamic Scenes · Implementation of Hadamard Matrices for Image Processing · A Generalized Criterion ...

  12. Understanding and preventing computer vision syndrome.

    Science.gov (United States)

    Loh, Ky; Reddy, Sc

    2008-01-01

    The invention of the computer and advancement in information technology have revolutionized and benefited society but at the same time have caused symptoms related to their usage such as ocular sprain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms that lead to computer vision syndrome are the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual effects of the computer such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  13. UNDERSTANDING AND PREVENTING COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    REDDY SC

    2008-01-01

    The invention of the computer and advancement in information technology have revolutionized and benefited society but at the same time have caused symptoms related to their usage such as ocular sprain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms that lead to computer vision syndrome are the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. The visual effects of the computer such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  14. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    Science.gov (United States)

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  15. Computer vision syndrome: A review.

    Science.gov (United States)

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

    Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are either environmental (improper lighting, display position and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and symptoms associated with the use of hand-held and stereoscopic displays.

  16. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  17. Color in Computer Vision Fundamentals and Applications

    CERN Document Server

    Gevers, Theo; van de Weijer, Joost; Geusebroek, Jan-Mark

    2012-01-01

    While the field of computer vision drives many of today’s digital technologies and communication networks, the topic of color has emerged only recently in most computer vision applications. One of the most extensive works to date on color in computer vision, this book provides a complete set of tools for working with color in the field of image understanding. Based on the authors’ intense collaboration for more than a decade and drawing on the latest thinking in the field of computer science, the book integrates topics from color science and computer vision, clearly linking theor

  18. Enhanced computer vision with Microsoft Kinect sensor: a review.

    Science.gov (United States)

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  19. Computer Vision for Timber Harvesting

    DEFF Research Database (Denmark)

    Dahl, Anders Lindbjerg

    ... planning. The investigations in this thesis are done as initial work on a planning and logistics system for timber harvesting called logTracker. In this thesis we have focused on three methods for the logTracker project, which include image segmentation, image classification, and image retrieval ... segments. The purpose of image segmentation is to make the basis for more advanced computer vision methods like object recognition and classification. Our second method concerns image classification, and we present a method where we classify small timber samples to tree species based on Active Appearance ... to the logTracker project, and ideas for further development of the system are provided. Building a complete logTracker system is a very demanding task, and the conclusion is that it is important to focus on the elements that can bring most value to timber harvest planning. Besides contributing...

  20. Computer vision in the poultry industry

    Science.gov (United States)

    Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...

  1. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  2. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective
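
    A hedged sketch of the basic scale-space construction discussed above, using plain SciPy rather than the book's Mathematica: the image is embedded in a family of progressively Gaussian-blurred versions, one per scale.

    ```python
    # Minimal Gaussian scale-space: one blurred copy of the image per scale sigma.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))

    sigmas = [1, 2, 4, 8]                         # coarse sampling of scale
    scale_space = np.stack([gaussian_filter(image, s) for s in sigmas])

    # Fine structure is suppressed as scale increases: the variance shrinks.
    for s, layer in zip(sigmas, scale_space):
        print(f"sigma={s}: std={layer.std():.4f}")
    ```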

  3. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems. · Provides i...
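
    As a hedged, concrete example of one widely used dense-correspondence estimator (Farneback optical flow in OpenCV, not necessarily a method from this book), the sketch below estimates per-pixel displacements between two synthetic frames, the second being a shifted copy of the first.

    ```python
    # Dense optical flow with OpenCV's Farneback method on synthetic frames.
    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    frame1 = (rng.random((120, 160)) * 255).astype(np.uint8)
    frame2 = np.roll(frame1, shift=(2, 3), axis=(0, 1))   # shift down 2 px, right 3 px

    flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # flow has shape (H, W, 2): per-pixel (dx, dy) displacements.
    # The medians should be roughly (3, 2) here, though exact values vary with the data.
    print("median flow:", np.median(flow[..., 0]), np.median(flow[..., 1]))
    ```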

  4. Scale-Space Theory in Computer Vision

    OpenAIRE

    1994-01-01

    A basic problem when deriving information from measured data, such as images, originates from the fact that objects in the world, and hence image structures, exist as meaningful entities only over certain ranges of scale. "Scale-Space Theory in Computer Vision" describes a formal theory for representing the notion of scale in image data, and shows how this theory applies to essential problems in computer vision such as computation of image features and cues to surface shape. The subjects rang...

  5. COMPUTER VISION SYNDROME: A SHORT REVIEW.

    OpenAIRE

    Sameena; Mohd Inayatullah

    2012-01-01

    Computers are probably one of the biggest scientific inventions of the modern era, and since then they have become an integral part of our life. The increased usage of computers has led to a variety of ocular symptoms which includes eye strain, tired eyes, irritation, redness, blurred vision, and diplopia, collectively referred to as Computer Vision Syndrome (CVS). CVS may have a significant impact not only on visual comfort but also occupational productivit...

  6. Biological Basis For Computer Vision: Some Perspectives

    Science.gov (United States)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
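
    As a small, hedged illustration of the receptive-field idea discussed above (a textbook simplification, not the author's model), the sketch below builds a centre-surround operator as a difference of Gaussians and probes it with a single bright point.

    ```python
    # Centre-surround (difference-of-Gaussians) receptive-field response.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_response(image, sigma_center=1.0, sigma_surround=3.0):
        """Centre-surround response: narrow Gaussian minus wide Gaussian."""
        return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

    img = np.zeros((64, 64))
    img[32, 32] = 1.0                              # a single bright point
    resp = dog_response(img)
    print("centre response:", round(resp[32, 32], 4), "(positive)")
    print("surround response:", round(resp[32, 38], 4), "(negative)")
    ```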

  7. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

    This Special Edited Volume takes a unique approach towards computational solutions for the upcoming field of study called Vision Science. Optics, Ophthalmology, and Optical Science have pursued an odyssey of optimizing configurations of optical systems, surveillance cameras and other nano-optical devices under the metaphor of Nano Science and Technology. Still, these systems fall short of the computational aspects needed to achieve the pinnacle of the human vision system. In this edited volume much attention has been given to addressing the coupling issues between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science, such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision, etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...

  8. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    ... for extracting information from the raw sensor data. In Withey’s talk, methods suitable for computer vision in autonomous, mobile robots will be described and results from the application of these vision techniques are provided, specifically in a robot system...

  9. The Facilitation of Problem-Based Learning in Medical Education Through a Computer-Mediated Tutorial Laboratory

    Science.gov (United States)

    Myers, A.; Barrows, H.S.; Koschmann, T.D.; Feltovich, P.J.

    1990-01-01

    This paper describes the means by which a computer-supported group interaction system known as the Computer-Mediated Tutorial Laboratory (CMTL) is used to support Problem-Based Learning Tutorials. The Problem-Based Learning Tutorial process has traditionally been solely a group process, sharing both the advantages and the disadvantages of any group process. This paper discusses the nature of Problem-Based Learning, the logistics of integrating computer mediation with the tutorial process and how computer mediation can be used to facilitate the eliciting and recording of individual input while enhancing the powerful effects of the group process.

  10. Object categorization: computer and human vision perspectives

    National Research Council Canada - National Science Library

    Dickinson, Sven J

    2009-01-01

    ... The result of a series of four highly successful workshops on the topic, the book gathers many of the most distinguished researchers from both computer and human vision to reflect on their experience...

  11. Computer Vision Assisted Virtual Reality Calibration

    Science.gov (United States)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  13. Computer Vision Method in Human Motion Detection

    Institute of Scientific and Technical Information of China (English)

    FU Li; FANG Shuai; XU Xin-he

    2007-01-01

    Human motion detection based on computer vision is a frontier research topic and is attracting increasing attention in the field of computer vision research. The wavelet transform is used to sharpen the ambiguous edges in human motion images. The effect of shadows on the image processing is also removed. The edge extraction can be successfully realized. This is an effective method for the research of human motion analysis systems.
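
    The paper's wavelet-based pipeline is not reproduced here; the following is only a minimal frame-differencing sketch of the underlying idea of detecting a moving region between two frames, with all data synthetic.

    ```python
    # Simple motion detection by frame differencing on synthetic frames.
    import numpy as np

    rng = np.random.default_rng(0)
    background = (rng.random((120, 160)) * 30).astype(np.uint8)
    frame = background.copy()
    frame[40:80, 60:90] += 100                     # a synthetic "person" appears

    diff = np.abs(frame.astype(int) - background.astype(int))
    motion_mask = diff > 50                        # simple global threshold

    ys, xs = np.nonzero(motion_mask)
    print("moving region bounding box:",
          (ys.min(), xs.min()), "-", (ys.max(), xs.max()))
    ```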

  14. Theories and Algorithms of Computational Vision

    Institute of Scientific and Technical Information of China (English)

    Ma Songde; Tan Tieniu; Hu Zhanyi; Jiang Tianzi; Lu Hanqing

    2005-01-01

    Inspired by recent progress in related fields such as cognitive psychology, neural physiology and neural anatomy, the project aims to put forward new computational theories and algorithms that could overcome the main shortcomings of Marr's computational theory, the dominant paradigm in the computer vision field for the last 20 years.

  15. QUALITY ASSESSMENT OF BISCUITS USING COMPUTER VISION

    Directory of Open Access Journals (Sweden)

    Archana A. Bade

    2016-08-01

    As developments and customer expectations in high-quality foods increase day by day, it becomes essential for food industries to maintain the quality of their products. Therefore it is necessary to have a quality inspection system for the product before packaging. Automation in industry gives better inspection speed compared to human vision. Automation based on computer vision is cost effective, flexible and provides one of the best alternatives for a more accurate, fast inspection system. Image processing and image analysis are vital parts of a computer vision system. In this paper, we discuss real-time quality inspection of premium-class biscuits using computer vision. It covers the design of the system, its implementation and verification, and installation of the complete system in the biscuit industry. The overall system comprises image acquisition, preprocessing, feature extraction using segmentation, color variation analysis, and interpretation, together with the system hardware.
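
    A hedged sketch of the final interpretation stage described above: measured biscuit features are compared against tolerance bands to reach an accept/reject decision. The feature names, values and tolerances are invented for illustration and are not the paper's.

    ```python
    # Accept/reject decision from pre-extracted biscuit features (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class BiscuitFeatures:
        area_mm2: float        # from the segmentation stage
        mean_brown: float      # colour score in [0, 1] from the colour stage
        edge_defects: int      # broken-edge count from shape analysis

    # Hypothetical tolerance bands for a premium-class biscuit.
    LIMITS = {"area_mm2": (1900.0, 2100.0), "mean_brown": (0.45, 0.70), "edge_defects": 1}

    def inspect(b: BiscuitFeatures) -> bool:
        """Return True (accept) only if every feature is within its tolerance band."""
        return (LIMITS["area_mm2"][0] <= b.area_mm2 <= LIMITS["area_mm2"][1]
                and LIMITS["mean_brown"][0] <= b.mean_brown <= LIMITS["mean_brown"][1]
                and b.edge_defects <= LIMITS["edge_defects"])

    print(inspect(BiscuitFeatures(2010.0, 0.55, 0)))   # True  -> accept
    print(inspect(BiscuitFeatures(1700.0, 0.80, 2)))   # False -> reject
    ```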

  16. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer vision and robotics are among the most challenging areas of the 21st century. Their applications range from agriculture to medicine, household applications to humanoids, deep-sea applications to space applications, and industrial applications to man-less plants. Today's technologies demand the production of intelligent machines, which enable applications in various domains and services. Robotics is one such area that encompasses a number of technologies, and its applications are widespread. Computational vision, or machine vision, is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of Computational Vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Objects using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, and CT and MRI Image Fusion based on Stationary Wavelet Transform. The book also covers articles from applicati...

  17. Laser Imaging Systems For Computer Vision

    Science.gov (United States)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, extending the access of computer "intelligence" to inspection, analysis and decision in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile for humans, etc. Considering that the heart of the problem is the matching of the optical methods and the computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussions of our present results and of their potential in precise 3D computer vision.

  18. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  19. Computer vision syndrome (CVS) - Thermographic Analysis

    Science.gov (United States)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has shown exponential growth in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their great acceptance by users. The consequences and impact of uninterrupted work with computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great efforts, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the existing differences in temperature variations of healthy ocular surfaces.

  20. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  1. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Aiming at concrete tasks of information fusion in the computer pan-vision (CPV) system, information fusion methods are studied thoroughly and some research progress is presented. Recognition of the vision testing object is realized by fusing vision information with non-vision auxiliary information, covering recognition of material defects, intelligent robots' autonomous recognition of parts, and automatic understanding and recognition of defect images by computer.

  2. Report on Computer Programs for Robotic Vision

    Science.gov (United States)

    Cunningham, R. T.; Kan, E. P.

    1986-01-01

    This collection of programs supports robotic research. The report describes the computer-vision software library of NASA's Jet Propulsion Laboratory. The programs evolved during the past 10 years of research into robotics. The collection includes low- and high-level image-processing software proved in applications ranging from factory automation to spacecraft tracking and grappling. The programs fall into several overlapping categories. The image utilities category comprises low-level routines that provide computer access to image data and some simple graphical capabilities for displaying the results of image processing.

  4. Computational and cognitive neuroscience of vision

    CERN Document Server

    2017-01-01

    Despite a plethora of scientific literature devoted to vision research and the trend toward integrative research, the borders between disciplines remain a practical difficulty. To address this problem, this book provides a systematic and comprehensive overview of vision from various perspectives, ranging from neuroscience to cognition, and from computational principles to engineering developments. It is written by leading international researchers in the field, with an emphasis on linking multiple disciplines and the impact such synergy can lead to in terms of both scientific breakthroughs and technology innovations. It is aimed at active researchers and interested scientists and engineers in related fields.

  5. Bringing Vision-Based Measurements into our Daily Life: A Grand Challenge for Computer Vision Systems

    OpenAIRE

    Scharcanski, Jacob

    2016-01-01

    Bringing computer vision into our daily life has been challenging researchers in industry and in academia over the past decades. However, the continuous development of cameras and computing systems turned computer vision-based measurements into a viable option, allowing new solutions to known problems. In this context, computer vision is a generic tool that can be used to measure and monitor phenomena in wide range of fields. The idea of using vision-based measurements is appealing, since the...

  6. JPL Robotics Laboratory computer vision software library

    Science.gov (United States)

    Cunningham, R.

    1984-01-01

    The past ten years of research on computer vision have matured into a powerful real-time system comprised of standardized commercial hardware, computers, and pipeline-processing laboratory prototypes, supported by an extensive set of image processing algorithms. The software system was constructed to be transportable via the choice of a popular high-level language (PASCAL) and a widely used computer (VAX-11/750); it comprises a whole realm of low-level and high-level processing software that has proven to be versatile for applications ranging from factory automation to space satellite tracking and grappling.

  7. Computer vision cracks the leaf code.

    Science.gov (United States)

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas

    2016-03-22

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.
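
    A hedged sketch of the "codebook of visual elements" idea (a generic bag-of-visual-words encoding, not the authors' pipeline), using synthetic local descriptors in place of real cleared-leaf images.

    ```python
    # Bag-of-visual-words encoding: cluster local descriptors into a codebook, then
    # represent each image as a histogram of codeword assignments.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Pretend each training image yielded local descriptors of dimension 32.
    train_descriptors = rng.random((2000, 32))

    codebook = KMeans(n_clusters=50, n_init=10, random_state=0).fit(train_descriptors)

    def encode(image_descriptors):
        """Histogram of codeword assignments = fixed-length image representation."""
        words = codebook.predict(image_descriptors)
        hist, _ = np.histogram(words, bins=np.arange(51))
        return hist / hist.sum()

    one_image = rng.random((200, 32))                 # descriptors from one "image"
    print("bag-of-words vector length:", encode(one_image).shape[0])   # 50
    ```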

  8. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  9. Potato operation: computer vision for agricultural robotics

    Science.gov (United States)

    Pun, Thierry; Lefebvre, Marc; Gil, Sylvia; Brunet, Denis; Dessimoz, Jean-Daniel; Guegerli, Paul

    1992-03-01

    Each year at harvest time millions of seed potatoes are checked for the presence of viruses by means of an Elisa test. The Potato Operation aims at automating the potato manipulation and pulp sampling procedure, starting from bunches of harvested potatoes and ending with the deposit of potato pulp into Elisa containers. Automating these manipulations addresses several issues, linking robotics and computer vision. The paper reports on the current status of this project. It first summarizes the robotic aspects, which consist of locating a potato in a bunch, grasping it, positioning it into the camera field of view, pumping the pulp sample and depositing it into a container. The computer vision aspects are then detailed. They concern locating particular potatoes in a bunch and finding the position of the best germ where the drill has to sample the pulp. The emphasis is put on the germ location problem. A general overview of the approach is given, which combines the processing of both frontal and silhouette views of the potato, together with movements of the robot arm (active vision). Frontal and silhouette analysis algorithms are then presented. Results are shown that confirm the feasibility of the approach.

  10. Beam damage detection using computer vision technology

    Science.gov (United States)

    Shi, Jing; Xu, Xiangjun; Wang, Jialai; Li, Gong

    2010-09-01

    In this paper, a new approach for efficient damage detection in engineering structures is introduced. The key concept is to use mature computer vision technology to capture the static deformation profile of a structure, and then employ profile analysis methods to detect the locations of damage. Combined with wireless communication techniques, the proposed approach can provide an effective and economical solution for remote monitoring of structural health. Moreover, a preliminary experiment is conducted to verify the proposed concept. A commercial computer vision camera is used to capture the static deformation profiles of cracked cantilever beams under loading. The profiles are then processed to reveal the existence and location of the irregularities on the deformation profiles by applying fractal dimension, wavelet transform and roughness methods, respectively. The proposed concept is validated on both one-crack and two-crack cantilever beam-type specimens. It is also shown that all three methods can produce satisfactory results based on the profiles provided by the vision camera. In addition, the profile quality is the determining factor for the noise level in the resultant detection signal.
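
    Of the three profile-analysis methods mentioned, the wavelet-based one is the easiest to sketch: a damage-induced kink in the static deformation profile shows up as a spike in the fine-scale detail coefficients. The snippet below is a hedged one-dimensional illustration using PyWavelets, not the authors' processing chain, and the choice of the db2 wavelet is an arbitrary assumption.

```python
# Locate the largest fine-scale irregularity in a 1-D deflection profile
# via level-1 wavelet detail coefficients (illustrative sketch).
import numpy as np
import pywt

def damage_location(profile):
    """Approximate index of the sharpest local irregularity in the profile."""
    _, detail = pywt.dwt(np.asarray(profile, dtype=float), "db2")
    return int(np.argmax(np.abs(detail))) * 2   # level-1 coefficients are subsampled by 2
```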

  11. Performance Measurement for Brain-Computer or Brain-Machine Interfaces: A Tutorial

    Science.gov (United States)

    Thompson, David E.; Quitadamo, Lucia R.; Mainardi, Luca; Laghari, Khalil ur Rehman; Gao, Shangkai; Kindermans, Pieter-Jan; Simeral, John D.; Fazel-Rezai, Reza; Matteucci, Matteo; Falk, Tiago H.; Bianchi, Luigi; Chestek, Cynthia A.; Huggins, Jane E.

    2014-01-01

    Objective Brain-Computer Interfaces (BCIs) have the potential to be valuable clinical tools. However, the varied nature of BCIs, combined with the large number of laboratories participating in BCI research, makes uniform performance reporting difficult. To address this situation, we present a tutorial on performance measurement in BCI research. Approach A workshop on this topic was held at the 2013 International BCI Meeting at Asilomar Conference Center in Pacific Grove, California. This manuscript contains the consensus opinion of the workshop members, refined through discussion in the following months and the input of authors who were unable to attend the workshop. Main Results Checklists for methods reporting were developed for both discrete and continuous BCIs. Relevant metrics are reviewed for different types of BCI research, with notes on their application to encourage uniform application between laboratories. Significance Graduate students and other researchers new to BCI research may find this tutorial a helpful introduction to performance measurement in the field. PMID:24838070

  12. Basics of thermal field theory - a tutorial on perturbative computations

    OpenAIRE

    Laine, Mikko; Vuorinen, Aleksi

    2017-01-01

    These lecture notes, suitable for a two-semester introductory course or self-study, offer an elementary and self-contained exposition of the basic tools and concepts that are encountered in practical computations in perturbative thermal field theory. Selected applications to heavy ion collision physics and cosmology are outlined in the last chapter.

  13. Tutorial: Signal Processing in Brain-Computer Interfaces

    NARCIS (Netherlands)

    Garcia Molina, G.

    2010-01-01

    Research in Electroencephalogram (EEG) based Brain-Computer Interfaces (BCIs) has been considerably expanding during the last few years. Such an expansion owes to a large extent to the multidisciplinary and challenging nature of BCI research. Signal processing undoubtedly constitutes an essential co

  14. The Neurodynamics of Cognition: A Tutorial on Computational Cognitive Neuroscience.

    Science.gov (United States)

    Ashby, F Gregory; Helie, Sebastien

    2011-08-01

    Computational Cognitive Neuroscience (CCN) is a new field that lies at the intersection of computational neuroscience, machine learning, and neural network theory (i.e., connectionism). The ideal CCN model should not make any assumptions that are known to contradict the current neuroscience literature and at the same time provide good accounts of behavior and at least some neuroscience data (e.g., single-neuron activity, fMRI data). Furthermore, once set, the architecture of the CCN network and the models of each individual unit should remain fixed throughout all applications. Because of the greater weight they place on biological accuracy, CCN models differ substantially from traditional neural network models in how each individual unit is modeled, how learning is modeled, and how behavior is generated from the network. A variety of CCN solutions to these three problems are described. A real example of this approach is described, and some advantages and limitations of the CCN approach are discussed.

  15. Replacement of traditional lectures with computer-based tutorials: a case study

    Directory of Open Access Journals (Sweden)

    Derek Lavelle

    1996-12-01

    Full Text Available This paper reports on a pilot project with a group of 60 second-year undergraduates studying the use of standard forms of contract in the construction industry. The project entailed the replacement of two of a series of nine scheduled lectures with a computer-based tutorial. The two main aims of the project were to test the viability of converting existing lecture material into computer-based material on an in-house production basis, and to obtain feedback from the student cohort on their behavioural response to the change in media. The effect on student performance was not measured at this stage of development.

  16. Computer vision technology in log volume inspection

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Log volume inspection is very important in forestry research and paper making engineering. This paper proposes a novel approach based on computer vision technology to cope with log volume inspection. The needed hardware system is analyzed and the details of the inspection algorithms are given. A fuzzy entropy-based image enhancement algorithm is presented for enhancing the image of the log cross-section. In many practical applications the cross-section is often partially invisible, and this is the major obstacle to correct inspection. To solve this problem, a robust Hausdorff distance method is proposed to recover the whole cross-section. Experimental results show that this method is efficient.
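
    To give a flavour of the Hausdorff-distance step, matching a partially visible cross-section contour against circular models, the hedged sketch below uses SciPy's directed Hausdorff distance over a coarse grid of candidate circles. It is a simplified stand-in for the paper's robust Hausdorff method, and it assumes the cross-section has already been segmented into a binary image.

```python
# Score candidate circle models against a partially visible cross-section contour
# using the directed Hausdorff distance (simplified illustration; the coarse
# grid search over centers and radii is an assumption).
import cv2
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def contour_points(binary_image):
    """Largest external contour of a binary (0/255) image as an (N, 2) float array."""
    res = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = res[0] if len(res) == 2 else res[1]   # OpenCV 4.x vs 3.x return order
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2).astype(float)

def circle_model(cx, cy, r, n=360):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([cx + r * np.cos(t), cy + r * np.sin(t)])

def best_circle(points, centers, radii):
    """Candidate circle whose model-to-data directed Hausdorff distance is smallest."""
    best, best_d = None, np.inf
    for cx, cy in centers:
        for r in radii:
            d = directed_hausdorff(circle_model(cx, cy, r), points)[0]
            if d < best_d:
                best, best_d = (cx, cy, r), d
    return best, best_d
```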

  17. Computer Vision Using Local Binary Patterns

    CERN Document Server

    Pietikainen, Matti; Zhao, Guoying; Ahonen, Timo

    2011-01-01

    The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches have provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, b
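
    For readers unfamiliar with LBP, the snippet below, a minimal sketch using scikit-image rather than the authors' code, computes a uniform LBP texture histogram, the kind of descriptor typically fed to a classifier in the applications mentioned above.

```python
# Minimal LBP texture descriptor using scikit-image (illustrative sketch).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    """Uniform LBP histogram of a 2-D grayscale array, normalized to sum to 1."""
    codes = local_binary_pattern(gray, P=points, R=radius, method="uniform")
    n_bins = points + 2                       # 'uniform' LBP yields P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist.astype(float) / hist.sum()

# Hypothetical usage: compare two texture patches by L1 histogram distance.
# d = np.abs(lbp_histogram(patch_a) - lbp_histogram(patch_b)).sum()
```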

  18. Computer vision for microscopy diagnosis of malaria.

    Science.gov (United States)

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.

  19. Measurement Error with Different Computer Vision Techniques

    Science.gov (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

    The goal of this work is to offer a comparison of the measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques, average errors, standard deviations, and uncertainties, obtaining a guide to identify the tolerances that each technique can achieve and choose the best.

  20. MEASUREMENT ERROR WITH DIFFERENT COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández

    2017-09-01

    Full Text Available The goal of this work is to offer a comparison of the measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques, average errors, standard deviations, and uncertainties, obtaining a guide to identify the tolerances that each technique can achieve and choose the best.

  1. Schlieren sequence analysis using computer vision

    Science.gov (United States)

    Smith, Nathanial Timothy

    Computer vision-based methods are proposed for extraction and measurement of flow structures of interest in schlieren video. As schlieren data has increased with faster frame rates, we are faced with thousands of images to analyze. This presents an opportunity to study global flow structures over time that may not be evident from surface measurements. A degree of automation is desirable to extract flow structures and features to give information on their behavior through the sequence. Using an interdisciplinary approach, the analysis of large schlieren data is recast as a computer vision problem. The double-cone schlieren sequence is used as a testbed for the methodology; it is unique in that it contains 5,000 images, complex phenomena, and is feature rich. Oblique structures such as shock waves and shear layers are common in schlieren images. A vision-based methodology is used to provide an estimate of oblique structure angles through the unsteady sequence. The methodology has been applied to a complex flowfield with multiple shocks. A converged detection success rate between 94% and 97% for these structures is obtained. The modified curvature scale space is used to define features at salient points on shock contours. A challenge in developing methods for feature extraction in schlieren images is the reconciliation of existing techniques with features of interest to an aerodynamicist. Domain-specific knowledge of physics must therefore be incorporated into the definition and detection phases. Known location and physically possible structure representations form a knowledge base that provides a unique feature definition and extraction. Model tip location and the motion of a shock intersection across several thousand frames are identified, localized, and tracked. Images are parsed into physically meaningful labels using segmentation. Using this representation, it is shown that in the double-cone flowfield, the dominant unsteady motion is associated with large scale
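
    As a hedged starting point, not the author's curvature-scale-space methodology, the dominant angle of oblique structures such as shocks in a single schlieren frame can be estimated with a standard Hough transform on the edge map; the file name and all thresholds below are placeholder assumptions.

```python
# Dominant oblique-structure angle in a schlieren frame via Canny edges + Hough lines.
import cv2
import numpy as np

frame = cv2.imread("schlieren_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder frame
edges = cv2.Canny(frame, 60, 180)                                  # scene-dependent thresholds
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 360, threshold=150)

if lines is not None:
    angles_deg = np.degrees(lines[:, 0, 1])        # theta of each detected line
    print("dominant structure angle (deg):", np.median(angles_deg))
```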

  2. A practical introduction to computer vision with OpenCV

    CERN Document Server

    Dawson-Howe, Kenneth

    2014-01-01

    Explains the theory behind basic computer vision and provides a bridge from the theory to practical implementation using the industry standard OpenCV libraries Computer Vision is a rapidly expanding area and it is becoming progressively easier for developers to make use of this field due to the ready availability of high quality libraries (such as OpenCV 2).  This text is intended to facilitate the practical use of computer vision with the goal being to bridge the gap between the theory and the practical implementation of computer vision. The book will explain how to use the relevant OpenCV
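
    In the spirit of the book's theory-to-practice bridge, a first OpenCV program usually looks like the hedged sketch below (the file name and parameter values are arbitrary examples, not taken from the text): load an image, convert to grayscale, smooth, and extract edges and contours.

```python
# A typical first OpenCV pipeline: read -> grayscale -> blur -> edges -> contours.
import cv2

image = cv2.imread("example.jpg")                 # placeholder file name
if image is None:
    raise SystemExit("image not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # single-channel image
smooth = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress noise before edge detection
edges = cv2.Canny(smooth, 50, 150)                # hysteresis thresholds are scene-dependent

# Contours of the edge map (OpenCV 4.x return signature).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} contours")

cv2.drawContours(image, contours, -1, (0, 255, 0), 2)
cv2.imwrite("contours.jpg", image)
```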

  3. Local spatial frequency analysis for computer vision

    Science.gov (United States)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
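
    The combined space/frequency representation the authors describe can be approximated by a windowed Fourier transform evaluated around each image location; the short NumPy sketch below is only an illustration of that idea, not the authors' representation, and the window size is an arbitrary assumption.

```python
# Local magnitude spectrum around a pixel: a crude space/frequency probe.
import numpy as np

def local_spectrum(image, row, col, window=32):
    """Windowed FFT magnitude of a (window x window) patch centred at (row, col).
    Assumes the window fits entirely inside the image."""
    half = window // 2
    patch = image[row - half:row + half, col - half:col + half].astype(float)
    patch = patch * np.hanning(window)[:, None] * np.hanning(window)[None, :]  # taper edges
    return np.abs(np.fft.fftshift(np.fft.fft2(patch)))

# Scanning local_spectrum over the image yields a space/frequency volume in which
# texture scale appears as peak location and spatial frequency varies with surface slant.
```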

  4. On computer vision in wireless sensor networks.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Ko, Teresa H.

    2004-09-01

    Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.

  5. Computer vision research with new imaging technology

    Science.gov (United States)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, which record both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces, but the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. Finally, the high-quality 3D face model is accurately recovered via the fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.

  6. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
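
    A minimal version of the segment-then-quantify pipeline described above can be written with scikit-image; the sketch below is illustrative only, with placeholder Otsu thresholding and just two hand-crafted features per cell rather than the hundreds used in practice.

```python
# Segment cells from a fluorescence image and extract simple per-cell features.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops

def cell_features(image, min_area=50):
    """Return an (n_cells, 2) array of [area, mean_intensity] per segmented cell."""
    smoothed = gaussian(image.astype(float), sigma=2)        # suppress shot noise
    mask = smoothed > threshold_otsu(smoothed)               # crude foreground/background split
    labels = label(mask)                                     # connected components = candidate cells
    feats = [(r.area, r.mean_intensity)
             for r in regionprops(labels, intensity_image=image)
             if r.area >= min_area]
    return np.array(feats)

# In a real HCS pipeline these per-cell rows would be fed to a classifier or
# clustering algorithm, as discussed in the review.
```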

  7. Topographic Mapping of Residual Vision by Computer

    Science.gov (United States)

    MacKeben, Manfred

    2008-01-01

    Many persons with low vision have diseases that damage the retina only in selected areas, which can lead to scotomas (blind spots) in perception. The most frequent of these diseases is age-related macular degeneration (AMD), in which foveal vision is often impaired by a central scotoma that impairs vision of fine detail and causes problems with…

  9. Non-Boolean computing with nanomagnets for computer vision applications

    Science.gov (United States)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  10. Gesture Recognition by Computer Vision: An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  11. Chapter 11. Quality evaluation of apple by computer vision

    Science.gov (United States)

    Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...

  13. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  14. A Framework for Generic State Estimation in Computer Vision Applications

    NARCIS (Netherlands)

    Sminchisescu, Cristian; Telea, Alexandru

    2001-01-01

    Experimenting and building integrated, operational systems in computational vision poses both theoretical and practical challenges, involving methodologies from control theory, statistics, optimization, computer graphics, and interaction. Consequently, a control and communication structure is needed

  16. On the performances of computer vision algorithms on mobile platforms

    Science.gov (United States)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.

  17. Robust level set method for computer vision

    Science.gov (United States)

    Si, Jia-rui; Li, Xiao-pei; Zhang, Hong-wei

    2005-12-01

    The level set method provides powerful numerical techniques for analyzing and solving interface evolution problems based on partial differential equations. It is particularly appropriate for image segmentation and other computer vision tasks. However, there is noise in every image, and noise is the main obstacle to image segmentation. In the level set method, the propagation fronts are apt to leak through gaps at locations of missing or fuzzy boundaries caused by noise. The robust level set method proposed in this paper is based on an adaptive Gaussian filter. The fast marching method provides a fast implementation of the level set method, and the adaptive Gaussian filter can adapt itself to the local characteristics of an image by adjusting its variance. Thus, the different parts of an image can be smoothed in different ways according to the degree of noisiness and the type of edges. Experimental results demonstrate that the adaptive Gaussian filter can greatly reduce noise without distorting the image and makes the level set method more robust and accurate.
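
    The adaptive smoothing idea, stronger smoothing in flat noisy regions and weaker smoothing near edges so the level set front does not leak, can be sketched as a gradient-weighted blend of two Gaussian-filtered images. This is only a rough approximation of the paper's adaptive Gaussian filter, whose variance is chosen from local image statistics; the sigma and scale values below are assumptions.

```python
# Gradient-weighted blend of weak and strong Gaussian smoothing:
# a crude stand-in for an adaptive-variance Gaussian filter.
import numpy as np
from scipy import ndimage

def adaptive_smooth(image, sigma_weak=0.5, sigma_strong=3.0, edge_scale=20.0):
    img = image.astype(float)
    weak = ndimage.gaussian_filter(img, sigma_weak)      # preserves edges
    strong = ndimage.gaussian_filter(img, sigma_strong)  # removes noise
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    edginess = np.hypot(gx, gy)
    w = edginess / (edginess + edge_scale)               # w -> 1 near edges, -> 0 in flat areas
    return w * weak + (1.0 - w) * strong
```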

  18. Intelligent Computer Vision System for Automated Classification

    Science.gov (United States)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (feature count and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of a suitable NN design and learning method.

  19. Computer vision for driver assistance systems

    Science.gov (United States)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks and their importance is still increasing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.

  20. Computer vision for yarn microtension measurement.

    Science.gov (United States)

    Wang, Qing; Lu, Changhou; Huang, Ran; Pan, Wei; Li, Xueyong

    2016-03-20

    Yarn tension is an important parameter for assuring textile quality. In this paper, an optical method to measure the microtension of moving yarn automatically in the winding system is proposed. The proposed method can measure the microtension of the moving yarn by analyzing the captured images. With a line laser illuminating the moving yarn, a linear array CCD camera is used to capture the images. Design principles of yarn microtension measuring equipment based on computer vision are presented. A local border difference algorithm is used to search the upper border of the moving yarn as the characteristic line, and Fourier descriptors are used to filter the high-frequency noise caused by unevenness of the yarn diameter. Based on the average value of the characteristic line, the captured images are classified into sagging images and vibration images. The average value is considered the sag coordinate of the sagging images. The peak and trough coordinates of the vibration are obtained by change-point detection. Then, according to axially moving string and catenary theory, we obtain the microtension of the moving yarn. Experiments were performed and compared with a resistance strain sensor, and the results show that the proposed method is effective and highly accurate.
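
    The Fourier-descriptor filtering step, removing high-frequency diameter unevenness from the extracted upper-border line before measuring sag and vibration, can be illustrated with a simple FFT low-pass on a 1-D border signal. The snippet below is a hedged sketch, not the authors' implementation, and the cutoff is an arbitrary example value.

```python
# Low-pass filter a 1-D yarn border profile by keeping only low-order Fourier coefficients.
import numpy as np

def smooth_border(border, keep=10):
    """Keep the `keep` lowest-frequency coefficients of the border signal (row value per column)."""
    coeffs = np.fft.rfft(border.astype(float))
    coeffs[keep:] = 0.0                     # discard high-frequency noise from diameter unevenness
    return np.fft.irfft(coeffs, n=len(border))

# The mean of the smoothed border gives the sag coordinate; peaks and troughs of the
# smoothed vibration frames can then be located by extremum or change-point detection.
```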

  1. Mahotas: Open source software for scriptable computer vision

    OpenAIRE

    Luis Pedro Coelho

    2013-01-01

    Mahotas is a computer vision library for Python. It contains traditional image processing functionality such as filtering and morphological operations as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, a dynamic programming language, which is appropriate for fast development, but the algorithms are implemented in C++ and are tuned for speed. The library is designed to fit in with the s...
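
    A typical Mahotas session follows the pattern sketched below, adapted from the style of its documentation: NumPy arrays in, NumPy arrays out, with the heavy lifting done in C++. The specific thresholds and sigma are placeholder assumptions.

```python
# Sketch of a typical Mahotas workflow on a grayscale uint8 NumPy array (hedged example).
import numpy as np
import mahotas as mh

def count_objects(gray_uint8):
    """Smooth, threshold (Otsu), label connected components, and compute texture features."""
    smoothed = mh.gaussian_filter(gray_uint8.astype(float), 2.0)
    threshold = mh.thresholding.otsu(smoothed.astype(np.uint8))
    labeled, n_objects = mh.label(smoothed > threshold)
    haralick = mh.features.haralick(gray_uint8).mean(axis=0)   # averaged Haralick texture features
    return n_objects, haralick
```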

  2. COMPUTER VISION APPLIED IN THE PRECISION CONTROL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Computer vision and its application in precision control systems are discussed. During fabrication, the accuracy of the products should be controlled reasonably and completely, and the precision should be maintained and adjusted according to feedback information obtained from on-line or off-line measurements in different procedures. Computer vision is one useful method for doing this. Computer vision and image manipulation are presented, and on this basis an n-dimensional vector for appraising machining precision is given.

  3. Fish species recognition using computer vision and a neural network

    NARCIS (Netherlands)

    Storbeck, F.; Daan, B.

    2001-01-01

    A system is described to recognize fish species by computer vision and a neural network program. The vision system measures a number of features of fish as seen by a camera perpendicular to a conveyor belt. The features used here are the widths and heights at various locations along the fish. First

  4. Comparing the Effectiveness of a Supplemental Computer-Based Food Safety Tutorial to Traditional Education in an Introductory Food Science Course

    Science.gov (United States)

    Fajardo-Lira, Claudia; Heiss, Cynthia

    2006-01-01

    The purpose of this study was to ascertain whether a Web-based computer tutorial for food safety is an effective tool in the education of food science and nutrition students. Students completing the Web-based tutorial had a greater improvement in pre-test scores compared with post-test scores and compared with students who attended lecture only.…

  6. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    Science.gov (United States)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  7. Use of Computer Vision to Detect Tangles in Tangled Objects

    OpenAIRE

    Parmar, Paritosh

    2014-01-01

    Untangling of structures like ropes and wires by autonomous robots can be useful in areas such as personal robotics, industry, and electrical wiring and repair by robots. This problem can be tackled by using a computer vision system in the robot. This paper proposes a computer vision-based method for analyzing visual data acquired from a camera to perceive the overlap of wires, ropes, and hoses, i.e., to detect tangles. Information obtained after processing the image according to the proposed method compr...

  8. Computer vision and laser scanner road environment perception

    OpenAIRE

    García, Fernando; Ponz Vila, Aurelio; Martín Gómez, David; Escalera, Arturo de la; Armingol, José M.

    2014-01-01

    A data fusion procedure is presented to enhance classical Advanced Driver Assistance Systems (ADAS). The novel vehicle safety approach combines two classical sensors: computer vision and a laser scanner. The laser scanner algorithm performs detection of vehicles and pedestrians based on pattern matching algorithms. The computer vision approach is based on Haar-like features for vehicles and Histogram of Oriented Gradients (HOG) features for pedestrians. The high level fusion procedure uses Kalman Filter...
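
    The vision half of the described fusion, HOG-based pedestrian detection, is available out of the box in OpenCV. The snippet below is a minimal hedged sketch of that detector only; the Haar-like vehicle detector, the laser-scanner processing, and the Kalman-filter fusion are not shown, and the file name and detector parameters are placeholder values.

```python
# Pedestrian detection with OpenCV's built-in HOG + linear SVM people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("road_scene.jpg")          # placeholder frame from the vehicle camera
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detections.jpg", frame)
```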

  9. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  10. Computer vision for dual spacecraft proximity operations -- A feasibility study

    Science.gov (United States)

    Stich, Melanie Katherine

    A computer vision-based navigation feasibility study consisting of two navigation algorithms is presented to determine whether computer vision can be used to safely navigate a small semi-autonomous inspection satellite in proximity to the International Space Station. Using stereoscopic image sensors and computer vision, the relative attitude determination and the relative distance determination algorithms estimate the inspection satellite's relative position in relation to its host spacecraft. An algorithm needed to calibrate the stereo camera system is presented, and this calibration method is discussed. These relative navigation algorithms are tested in NASA Johnson Space Center's simulation software, Engineering Dynamic On-board Ubiquitous Graphics (DOUG) Graphics for Exploration (EDGE), using a rendered model of the International Space Station to serve as the host spacecraft. Both vision-based algorithms attained successful results, and recommended future work is discussed.
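
    Relative distance from a calibrated, rectified stereo pair is commonly estimated from disparity, as in the hedged OpenCV sketch below. This is a generic block-matching example, not the thesis algorithms, and the file names, focal length, and baseline are placeholder values.

```python
# Depth from a rectified stereo pair via block matching: depth = f * B / disparity.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image (placeholder)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image (placeholder)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point

focal_px, baseline_m = 700.0, 0.12          # assumed calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```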

  11. Web-based computational chemistry education with CHARMMing I: Lessons and tutorial.

    Directory of Open Access Journals (Sweden)

    Benjamin T Miller

    2014-07-01

    Full Text Available This article describes the development, implementation, and use of web-based "lessons" to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that "point and click" simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance.

  12. Web-based computational chemistry education with CHARMMing I: Lessons and tutorial.

    Science.gov (United States)

    Miller, Benjamin T; Singh, Rishi P; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R; Woodcock, H Lee

    2014-07-01

    This article describes the development, implementation, and use of web-based "lessons" to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that "point and click" simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance.

  13. Wavelet applied to computer vision in astrophysics

    Science.gov (United States)

    Bijaoui, Albert; Slezak, Eric; Traina, Myriam

    2004-02-01

    Multiscale analyses can be provided by applying wavelet transforms. For image processing purposes, we applied algorithms which imply a quasi-isotropic vision. For a uniform noisy image, a wavelet coefficient W has a probability density function (PDF) p(W) which depends on the noise statistic. The PDF was determined for many statistical noises: Gauss, Poisson, Rayleigh, exponential. For CCD observations, the Anscombe transform was generalized to a mixed Gauss+Poisson noise. From the discrete wavelet transform a set of significant wavelet coefficients (SSWC) is obtained. Many applications have been derived, such as denoising and deconvolution. Our main application is the decomposition of the image into objects, i.e., the vision. At each scale an image labelling is performed in the SSWC. An interscale graph linking the fields of significant pixels is then obtained. The objects are identified using this graph. The wavelet coefficients of the tree related to a given object allow one to reconstruct its image by a classical inverse method. This vision model has been applied to astronomical images, improving the analysis of complex structures.
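
    The "significant coefficients" idea can be illustrated with PyWavelets: decompose, keep only coefficients above a noise-dependent threshold, and reconstruct. The sketch below is a generic hard-thresholding example with an assumed Gaussian noise level, not the authors' multiscale vision model, and it shows the denoising step only.

```python
# Wavelet hard-thresholding: keep only "significant" coefficients, then reconstruct.
import numpy as np
import pywt

def wavelet_denoise(image, sigma, k=3.0, wavelet="db2", levels=4):
    """Zero detail coefficients below k*sigma (assumed Gaussian noise) and reconstruct."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    kept = [coeffs[0]]                                   # keep the coarse approximation
    for details in coeffs[1:]:
        kept.append(tuple(np.where(np.abs(d) >= k * sigma, d, 0.0) for d in details))
    return pywt.waverec2(kept, wavelet)
```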

  14. Safety Computer Vision Rules for Improved Sensor Certification

    DEFF Research Database (Denmark)

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints....... The language allows developers to increase trustworthiness in the robot perception system, which we argue would increase compliance with safety standards. We demonstrate the usage of the language to improve reliability in a perception pipeline, thus allowing the vision expert to concisely express the safety...

  15. Application of chaos and fractals to computer vision

    CERN Document Server

    Farmer, Michael E

    2014-01-01

    This book provides a thorough investigation of the application of chaos theory and fractal analysis to computer vision. The field of chaos theory has been studied in dynamical physical systems, and has been very successful in providing computational models for very complex problems ranging from weather systems to neural pathway signal propagation. Computer vision researchers have derived motivation for their algorithms from biology and physics for many years as witnessed by the optical flow algorithm, the oscillator model underlying graphical cuts and of course neural networks. These algorithm

  16. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    Science.gov (United States)

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant…

  18. DIKU-LASMEA Workshop on Computer Vision, Copenhagen, March, 2009

    DEFF Research Database (Denmark)

    Fihl, Preben

    This report will cover the participation in the DIKU-LASMEA Workshop on Computer Vision held at the department of computer science, University of Copenhagen, in March 2009. The report will give a concise description of the topics presented at the workshop, and briefly discuss how the work relates...

  1. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  3. Computer vision and machine learning with RGB-D sensors

    CERN Document Server

    Shao, Ling; Kohli, Pushmeet

    2014-01-01

    This book presents an interdisciplinary selection of cutting-edge research on RGB-D based computer vision. Features: discusses the calibration of color and depth cameras, the reduction of noise on depth maps and methods for capturing human performance in 3D; reviews a selection of applications which use RGB-D information to reconstruct human figures, evaluate energy consumption and obtain accurate action classification; presents an approach for 3D object retrieval and for the reconstruction of gas flow from multiple Kinect cameras; describes an RGB-D computer vision system designed to assist t

  4. OpenCV 3.0 computer vision with Java

    CERN Document Server

    Baggio, Daniel Lélis

    2015-01-01

    If you are a Java developer, student, researcher, or hobbyist wanting to create computer vision applications in Java then this book is for you. If you are an experienced C/C++ developer who is used to working with OpenCV, you will also find this book very useful for migrating your applications to Java. All you need is basic knowledge of Java, with no prior understanding of computer vision required, as this book will give you clear explanations and examples of the basics.

  5. A multidisciplinary approach to solving computer related vision problems.

    Science.gov (United States)

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as a part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions that contribute to workstation design or that provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve computer-related vision issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  6. Photogrammetric computer vision statistics, geometry, orientation and reconstruction

    CERN Document Server

    Förstner, Wolfgang

    2016-01-01

    This textbook offers a statistical view on the geometry of multiple view analysis, required for camera calibration and orientation and for geometric scene reconstruction based on geometric image features. The authors have backgrounds in geodesy and also long experience with development and research in computer vision, and this is the first book to present a joint approach from the converging fields of photogrammetry and computer vision. Part I of the book provides an introduction to estimation theory, covering aspects such as Bayesian estimation, variance components, and sequential estimation, with a focus on the statistically sound diagnostics of estimation results essential in vision metrology. Part II provides tools for 2D and 3D geometric reasoning using projective geometry. This includes oriented projective geometry and tools for statistically optimal estimation and test of geometric entities and transformations and their relations, tools that are useful also in the context of uncertain reasoning in po...
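
    As a small, hedged example of the kind of geometric estimation problem treated in the book (not taken from its text), the sketch below robustly estimates a planar homography from point correspondences with RANSAC; the correspondences are invented placeholder values.

```python
# Robust estimation of a planar homography from point correspondences (RANSAC).
import cv2
import numpy as np

# Hypothetical corresponding points in two images (here related by a pure translation).
pts_src = np.array([[10, 10], [200, 15], [205, 180], [12, 175], [100, 90], [150, 40]], np.float32)
pts_dst = pts_src + np.float32([20, 30])

H, inlier_mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print("estimated homography:\n", H)
print("inliers:", int(inlier_mask.sum()))
```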

  7. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  8. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: How we do it

    Energy Technology Data Exchange (ETDEWEB)

    Schlorhaufer, C., E-mail: Schlorhaufer.Celia@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Behrends, M., E-mail: behrends.marianne@mh-hannover.de [Peter L. Reichertz Department of Medical Informatics, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Diekhaus, G., E-mail: Diekhaus.Gesche@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Keberle, M., E-mail: m.keberle@bk-paderborn.de [Department of Diagnostic and Interventional Radiology, Brüderkrankenhaus St. Josef Paderborn, Husener Str. 46, 33098 Paderborn (Germany); Weidemann, J., E-mail: Weidemann.Juergen@mh-hannover.de [Department of Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover (Germany)

    2012-12-15

    Purpose: Because of the time pressure in polytraumatized patients, all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency, acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. Materials and methods: Based on current multimedia theories such as cognitive load theory, a didactic concept was developed. The learning management system ILIAS was chosen as the web environment. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. Results: The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed, commented movies of normal data sets. Common and important pathologies can be explored in a self-directed manner. Conclusions: Ambitious didactic concepts can be supported by a web-based application built on cognitive load theory and currently available software tools.

  9. Using Advanced Computer Vision Algorithms on Small Mobile Robots

    Science.gov (United States)

    2006-04-20

    this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: (1) object classification using a boosted cascade of classifiers trained with the AdaBoost training
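
    A boosted cascade of classifiers of this kind is what OpenCV exposes through its CascadeClassifier API. The sketch below is not the authors' implementation; it only illustrates how such an AdaBoost-trained cascade could be run on a single robot camera frame, with the cascade file and image paths as placeholders.

    import cv2

    # Load a pre-trained boosted cascade (AdaBoost-trained, Viola-Jones style).
    # The XML path is a placeholder; any OpenCV-format cascade file works here.
    cascade = cv2.CascadeClassifier("object_cascade.xml")

    frame = cv2.imread("robot_camera_frame.png")      # one frame from the robot camera
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                     # normalise illumination

    # Slide the cascade over the image at multiple scales.
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                          minSize=(24, 24))

    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)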

  10. Computer Vision Tools for Finding Images and Video Sequences.

    Science.gov (United States)

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  11. A Knowledge-Intensive Approach to Computer Vision Systems

    NARCIS (Netherlands)

    Koenderink-Ketelaars, N.J.J.P.

    2010-01-01

    This thesis focusses on the modelling of knowledge-intensive computer vision tasks. Knowledge-intensive tasks are tasks that require a high level of expert knowledge to be performed successfully. Such tasks are generally performed by a task expert. Task experts have a lot of experience in performing

  12. Information theory in computer vision and pattern recognition

    CERN Document Server

    Escolano, Francisco; Bonev, Boyan

    2009-01-01

    Researchers are bringing information theory elements to the computer vision and pattern recognition (CVPR) arena. Among these elements there are measures (entropy, mutual information), principles (maximum entropy, minimax entropy) and theories (rate distortion theory, method of types). This book explores the latter elements.

  13. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  15. Evaluating the Instructional Efficacy of Computer-Mediated Interactive Multimedia: Comparing Three Elementary Statistics Tutorial Modules.

    Science.gov (United States)

    Gonzalez, Gerardo M.; Birch, Marc A.

    2000-01-01

    This study evaluated three tutorial modules, equivalent in content but different in mode of presentation, for introducing elementary statistics concepts. Fifty-seven college students participated in one of four randomly assigned conditions: paper-and-pencil, basic computerized, computerized multimedia, or control group. Participant evaluations…

  16. Distributed randomized algorithms for opinion formation, centrality computation and power systems estimation: A tutorial overview

    NARCIS (Netherlands)

    Frasca, Paolo; Ishii, Hideaki; Ravazzi, Chiara; Tempo, Roberto

    2015-01-01

    In this tutorial paper, we study three specific applications: opinion formation in social networks, centrality measures in complex networks and estimation problems in large-scale power systems. These applications fall under a general framework which aims at the construction of algorithms for distrib

  17. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper reviews the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data; other systems feed raw data directly into the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of human vision models and neural network models are analyzed.

  18. Inspecting wood surface roughness using computer vision

    Science.gov (United States)

    Zhao, Xuezeng

    1995-01-01

    Wood surface roughness is one of the important indexes of manufactured wood products. This paper presents an attempt to develop a new method to evaluate manufactured wood surface roughness through the use of image processing and pattern recognition techniques. A collimated plane of light or a laser is directed onto the inspected wood surface at a sharp angle of incidence. An optical system consisting of lenses focuses the image of the surface onto the objective of a CCD camera; the camera captures the image of the surface, which is digitized with a CA6300 board and transmitted to a microcomputer. Using the methodology presented in this paper, the computer filters out the noise and the wood's anatomical grain and gives an evaluation of the nature of the manufactured wood surface. Preliminary results indicate that the method has the advantages of being non-contact, three-dimensional and high-speed, and that it can be used for classification and in-time measurement of manufactured wood products.
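
    The paper's own grain-filtering method is not detailed in the abstract, so the following is only a rough sketch of laser-line triangulation for roughness: it assumes the projected line is the brightest response in each image column, extracts the height profile, removes the large-scale waviness, and reports simple roughness statistics in pixel units.

    import cv2
    import numpy as np

    # Grey-level image of the wood surface with the projected laser line.
    img = cv2.imread("wood_laser_line.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    img = cv2.GaussianBlur(img, (5, 5), 0)            # suppress sensor noise

    # Assume the laser line is the brightest response in each column: its row
    # position traces the surface height profile (triangulation geometry).
    profile = img.argmax(axis=0).astype(np.float32)

    # Remove the large-scale shape (waviness) with a wide moving average,
    # leaving the fine-scale deviations that correspond to roughness.
    kernel = np.ones(51) / 51.0
    waviness = np.convolve(profile, kernel, mode="same")
    deviation = profile - waviness

    Ra = np.mean(np.abs(deviation))                   # arithmetic mean roughness (pixels)
    Rq = np.sqrt(np.mean(deviation ** 2))             # RMS roughness (pixels)
    print(f"Ra = {Ra:.2f} px, Rq = {Rq:.2f} px")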

  19. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  20. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities

    CERN Document Server

    Buyya, Rajkumar; Venugopal, Srikumar

    2008-01-01

    This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents some representative Cloud platforms especially those developed in industries along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the 3rd generation Aneka enterprise Grid technology; reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment along with pointers to future community research; and concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision.

  1. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Silvia Matiacevich

    2013-01-01

    Full Text Available Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke,” “Brigitta”, “Elliott”, “Centurion”, “Star,” and “Jewel”, measuring quality parameters such as °Brix, pH, moisture content using standard techniques and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P<0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements.
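
    The abstract does not give the authors' image-processing pipeline, so the following sketch only illustrates how colour shift and fungal coverage could be quantified for a single segmented berry; the mask file and HSV thresholds are illustrative assumptions, not the published calibration.

    import cv2
    import numpy as np

    img = cv2.imread("blueberry.png")                          # BGR image of one berry
    mask = cv2.imread("berry_mask.png", cv2.IMREAD_GRAYSCALE)  # segmented berry region

    # Colour: mean hue inside the berry mask (blue, ~120 deg, shifts towards red as the fruit decays).
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mean_hue = cv2.mean(hsv, mask=mask)[0] * 2.0               # OpenCV stores hue as 0-179

    # Fungal presence: fraction of the berry covered by pale, low-saturation pixels.
    sat = hsv[:, :, 1]
    val = hsv[:, :, 2]
    mould = (sat < 40) & (val > 150) & (mask > 0)              # illustrative threshold
    mould_pct = 100.0 * mould.sum() / max((mask > 0).sum(), 1)

    print(f"mean hue: {mean_hue:.1f} deg, fungal coverage: {mould_pct:.1f}%")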

  2. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision.

    Science.gov (United States)

    Matiacevich, Silvia; Celis Cofré, Daniela; Silva, Patricia; Enrione, Javier; Osorio, Fernando

    2013-01-01

    Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: "Duke," "Brigitta", "Elliott", "Centurion", "Star," and "Jewel", measuring quality parameters such as °Brix, pH, moisture content using standard techniques and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0-21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P<0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements.

  3. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl destined for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered; the presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a custom heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure that accounts for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
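
    As an illustration of the measurement idea (not the authors' custom heuristic detector), the sketch below finds the nucleus with OpenCV's Hough circle transform, takes the outer pearl boundary from a thresholded contour, and derives a radial nacre thickness profile; it assumes a reasonably contrasted X-ray image and a roughly circular nucleus.

    import cv2
    import numpy as np

    xray = cv2.imread("pearl_xray.png", cv2.IMREAD_GRAYSCALE)
    blur = cv2.medianBlur(xray, 5)

    # Outer pearl boundary: threshold and take the largest contour.
    _, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pearl = max(contours, key=cv2.contourArea).reshape(-1, 2)

    # Nucleus: circle detection (a stand-in for the paper's heuristic detector).
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                               param1=100, param2=40, minRadius=30, maxRadius=200)
    cx, cy, r_nuc = circles[0, 0]

    # 2-D nacre thickness profile: distance from the nucleus boundary to the
    # pearl boundary along each angular direction around the nucleus centre.
    angles = np.arctan2(pearl[:, 1] - cy, pearl[:, 0] - cx)
    radii = np.hypot(pearl[:, 0] - cx, pearl[:, 1] - cy)
    thickness = radii[np.argsort(angles)] - r_nuc      # nacre thickness (pixels) per angle
    print("min / mean nacre thickness [px]:", thickness.min(), thickness.mean())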

  4. Image Segmentation for Food Quality Evaluation Using Computer Vision System

    Directory of Open Access Journals (Sweden)

    Nandhini. P

    2014-02-01

    Full Text Available Quality evaluation is an important factor in food processing industries, where human inspection introduces high variability; computer vision systems can reduce this. In many countries, food processing industries aim at delivering defect-free food materials to consumers. Human evaluation techniques suffer from high labour costs, inconsistency and variability. This paper therefore describes the steps for identifying defects in food materials using a computer vision system: image acquisition, preprocessing, image segmentation, feature identification and classification. The proposed framework compares various filters; the hybrid median filter, which gave the highest PSNR value, was selected and is used in preprocessing. Image segmentation techniques such as colour-based binary image segmentation and particle swarm optimization are compared using segmentation parameters such as accuracy, sensitivity and specificity, and colour-based binary image segmentation is found to be well suited for food quality evaluation. Finally, the paper provides an efficient method for identifying the defective parts in food materials.
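
    A minimal sketch of the two steps highlighted in the abstract, filter selection by PSNR and colour-based binary segmentation, is given below; a plain median filter stands in for the hybrid median filter, and the HSV range for "good" product pixels is purely illustrative.

    import cv2
    import numpy as np

    def psnr(reference, test):
        # Peak signal-to-noise ratio between two 8-bit images.
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

    img = cv2.imread("food_sample.png")

    # Compare candidate preprocessing filters by PSNR against the unfiltered image
    # (a simple proxy for the comparison described in the abstract).
    candidates = {
        "median":   cv2.medianBlur(img, 3),            # stand-in for the hybrid median filter
        "gaussian": cv2.GaussianBlur(img, (3, 3), 0),
    }
    best = max(candidates, key=lambda name: psnr(img, candidates[name]))
    filtered = candidates[best]

    # Colour-based binary segmentation: keep pixels inside an illustrative HSV range
    # and treat everything else (e.g. dark or discoloured spots) as potential defects.
    hsv = cv2.cvtColor(filtered, cv2.COLOR_BGR2HSV)
    good = cv2.inRange(hsv, (10, 60, 60), (40, 255, 255))
    defects = cv2.bitwise_not(good)
    print("best filter:", best, "- defect fraction:", defects.mean() / 255.0)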

  5. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Document Server

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  6. Computer vision research at Marshall Space Flight Center

    Science.gov (United States)

    Vinz, Frank L.

    1990-01-01

    Orbital docking, inspection, and servicing are operations which have the potential for capability enhancement as well as cost reduction for space operations by the application of computer vision technology. Research at MSFC has been a natural outgrowth of orbital docking simulations for remote manually controlled vehicles such as the Teleoperator Retrieval System and the Orbital Maneuvering Vehicle (OMV). Baseline design of the OMV dictates teleoperator control from a ground station. This necessitates a high data-rate communication network and results in several seconds of time delay. Operational costs and vehicle control difficulties could be alleviated by an autonomous or semi-autonomous control system onboard the OMV which would be based on a computer vision system having capability to recognize video images in real time. A concept under development at MSFC with these attributes is based on syntactic pattern recognition. It uses tree graphs for rapid recognition of binary images of known orbiting target vehicles. This technique and others being investigated at MSFC will be evaluated in realistic conditions by the use of MSFC orbital docking simulators. Computer vision is also being applied at MSFC as part of the supporting development for Work Package One of Space Station Freedom.

  7. Computer vision syndrome and ergonomic practices among undergraduate university students.

    Science.gov (United States)

    Mowatt, Lizette; Gordon, Carron; Santosh, Arvind Babu Rajendra; Jones, Thaon

    2017-10-05

    To determine the prevalence of computer vision syndrome (CVS) and ergonomic practices among students in the Faculty of Medical Sciences at The University of the West Indies (UWI), Jamaica, a cross-sectional study was done with a self-administered questionnaire. Four hundred and nine students participated; 78% were females. The mean age was 21.6 years. Neck pain (75.1%), eye strain (67%), shoulder pain (65.5%) and eye burn (61.9%) were the most common CVS symptoms. Dry eyes (26.2%), double vision (28.9%) and blurred vision (51.6%) were the least commonly experienced symptoms. Eye burning (P = .001), eye strain (P = .041) and neck pain (P = .023) were significantly related to level of viewing. Moderate eye burning (55.1%) and double vision (56%) occurred in those who used handheld devices (P = .001 and .007, respectively). Moderate blurred vision was reported in 52% who looked down at the device compared with 14.8% who held it at an angle. Severe eye strain occurred in 63% of those who looked down at a device compared with 21% who kept the device at eye level. Shoulder pain was not related to pattern of use. Ocular symptoms and neck pain were less likely if the device was held just below eye level. There is a high prevalence of CVS symptoms amongst university students; neck pain, eye strain and eye burning in particular could be reduced with improved ergonomic practices. © 2017 John Wiley & Sons Ltd.

  8. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    Science.gov (United States)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  9. Do You Think You Can? The Influence of Student Self-Efficacy on the Effectiveness of Tutorial Dialogue for Computer Science

    Science.gov (United States)

    Wiggins, Joseph B.; Grafsgaard, Joseph F.; Boyer, Kristy Elizabeth; Wiebe, Eric N.; Lester, James C.

    2017-01-01

    In recent years, significant advances have been made in intelligent tutoring systems, and these advances hold great promise for adaptively supporting computer science (CS) learning. In particular, tutorial dialogue systems that engage students in natural language dialogue can create rich, adaptive interactions. A promising approach to increasing…

  11. MER-DIMES : a planetary landing application of computer vision

    Science.gov (United States)

    Cheng, Yang; Johnson, Andrew; Matthies, Larry

    2005-01-01

    During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify the image data to a level ground plane. Feature selection and tracking are employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.
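
    Only the image-processing part of DIMES lends itself to a small sketch. The code below assumes two already-rectified descent images, selects and tracks features with OpenCV, and converts the median pixel shift to a horizontal velocity using placeholder values for the ground sampling distance and frame interval; the radar/IMU fusion and the consistency check against inertial data are omitted.

    import cv2
    import numpy as np

    img0 = cv2.imread("descent_t0_rectified.png", cv2.IMREAD_GRAYSCALE)
    img1 = cv2.imread("descent_t1_rectified.png", cv2.IMREAD_GRAYSCALE)

    # Select a small number of strong features in the first rectified image.
    pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=50, qualityLevel=0.01, minDistance=30)

    # Track them into the second image with pyramidal Lucas-Kanade.
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
    good0 = pts0[status.ravel() == 1].reshape(-1, 2)
    good1 = pts1[status.ravel() == 1].reshape(-1, 2)

    # Median image displacement (robust to a few bad tracks).
    shift_px = np.median(good1 - good0, axis=0)

    # Convert to metres per second with placeholder values for the ground sampling
    # distance (from altitude and camera model) and the time between exposures.
    gsd_m_per_px = 0.5        # assumed metres per pixel at the current altitude
    dt_s = 3.75               # assumed time between the two descent images
    velocity = shift_px * gsd_m_per_px / dt_s
    print("horizontal velocity estimate [m/s]:", velocity)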

  13. Dataflow-Based Mapping of Computer Vision Algorithms onto FPGAs

    Directory of Open Access Journals (Sweden)

    Schlessman Jason

    2007-01-01

    Full Text Available We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF, which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.
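
    The essence of the HPDF model, as described above, is that token rates may be re-parameterized between graph iterations but are identical on every edge within one iteration. The toy sketch below is an illustration of that idea only, not the authors' design flow or HDL generation step.

    from dataclasses import dataclass, field

    @dataclass
    class Edge:
        buffer: list = field(default_factory=list)

    @dataclass
    class Actor:
        name: str
        fire: callable            # consumes one block of tokens, returns the next block

    def run_iteration(actors, edges, rate):
        """Fire the actor chain once; every edge uses the same rate this iteration."""
        tokens = list(range(rate))                # source tokens for this iteration
        for actor, edge in zip(actors, edges):
            edge.buffer = actor.fire(tokens)      # produce `rate` tokens on the edge
            assert len(edge.buffer) == rate       # homogeneous rate across all edges
            tokens = edge.buffer
        return tokens

    # Example: a 3-stage vision-like pipeline (capture -> filter -> threshold),
    # run for two iterations with different parameterized rates (e.g. row widths).
    actors = [Actor("capture", lambda xs: [x * 2 for x in xs]),
              Actor("filter",  lambda xs: [x + 1 for x in xs]),
              Actor("thresh",  lambda xs: [int(x > 4) for x in xs])]
    edges = [Edge(), Edge(), Edge()]
    for rate in (4, 8):                           # rate re-parameterized per iteration
        print(rate, run_iteration(actors, edges, rate))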

  14. EFFICACY OF TRIPHALA GHRITA NETRATARPAN IN COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    Deepak P. Sawant

    2013-04-01

    Full Text Available In the present era, computerization is necessary for a country's progress, yet work at a computer is intensive and often tiring. Computer vision syndrome (CVS) is the complex of eye and vision problems related to near work that are experienced during or in relation to computer use. Traditional medicine has been practiced for many centuries in many parts of the world. The present study was undertaken to evaluate the effect of the Triphala Ghrita Tarpana herbal compound preparation, prepared as per the classics, in a trial group of 30 patients suffering from CVS, applied for 7 days in each of three consecutive months. The duration of Tarpana was 15-20 minutes. The control group also comprised 30 patients, who were advised certain eye exercises. The results in the trial group were satisfactory, and Tarpana was found to be effective in treating all the signs and symptoms of CVS, which was supported by the statistical analysis (P<0.001).

  15. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo

  16. Displacement measurement system for inverters using computer micro-vision

    Science.gov (United States)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

    We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm combined with an optical microscope. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
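
    The paper's fast template matching algorithm is not spelled out in the abstract; the sketch below shows one common way to obtain sub-pixel displacements, normalized cross-correlation with a parabolic fit around the correlation peak, with the microscope's micron-per-pixel calibration as a placeholder value.

    import cv2
    import numpy as np

    def match_subpixel(frame, template):
        """Locate `template` in `frame` with sub-pixel precision (parabolic peak fit)."""
        score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(score)
        dx = dy = 0.0
        # Fit a parabola through the correlation peak and its neighbours.
        if 0 < x < score.shape[1] - 1:
            l, c, r = score[y, x - 1], score[y, x], score[y, x + 1]
            dx = 0.5 * (l - r) / (l - 2 * c + r)
        if 0 < y < score.shape[0] - 1:
            u, c, d = score[y - 1, x], score[y, x], score[y + 1, x]
            dy = 0.5 * (u - d) / (u - 2 * c + d)
        return x + dx, y + dy

    ref = cv2.imread("inverter_frame_000.png", cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread("inverter_frame_001.png", cv2.IMREAD_GRAYSCALE)
    template = ref[100:164, 100:164]              # a distinctive patch on the inverter

    x0, y0 = match_subpixel(ref, template)
    x1, y1 = match_subpixel(cur, template)

    UM_PER_PX = 0.12                              # assumed microscope calibration
    print("displacement [um]:", (x1 - x0) * UM_PER_PX, (y1 - y0) * UM_PER_PX)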

  17. Shape perception in human and computer vision an interdisciplinary perspective

    CERN Document Server

    Dickinson, Sven J

    2013-01-01

    This comprehensive and authoritative text/reference presents a unique, multidisciplinary perspective on Shape Perception in Human and Computer Vision. Rather than focusing purely on the state of the art, the book provides viewpoints from world-class researchers reflecting broadly on the issues that have shaped the field. Drawing upon many years of experience, each contributor discusses the trends followed and the progress made, in addition to identifying the major challenges that still lie ahead. Topics and features: examines each topic from a range of viewpoints, rather than promoting a speci

  18. Karibu Tutorials

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    These tutorials demonstrate how to use Karibu for high-quality data collection, in particular how to set up a distributed Karibu system and how to adapt Karibu to your particular data collection needs.

  19. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  20. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    Science.gov (United States)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  2. Neural networks and neuroscience-inspired computer vision.

    Science.gov (United States)

    Cox, David Daniel; Dean, Thomas

    2014-09-22

    Brains are, at a fundamental level, biological computing machines. They transform a torrent of complex and ambiguous sensory information into coherent thought and action, allowing an organism to perceive and model its environment, synthesize and make decisions from disparate streams of information, and adapt to a changing environment. Against this backdrop, it is perhaps not surprising that computer science, the science of building artificial computational systems, has long looked to biology for inspiration. However, while the opportunities for cross-pollination between neuroscience and computer science are great, the road to achieving brain-like algorithms has been long and rocky. Here, we review the historical connections between neuroscience and computer science, and we look forward to a new era of potential collaboration, enabled by recent rapid advances in both biologically-inspired computer vision and in experimental neuroscience methods. In particular, we explore where neuroscience-inspired algorithms have succeeded, where they still fail, and we identify areas where deeper connections are likely to be fruitful.

  3. Computer vision challenges and technologies for agile manufacturing

    Science.gov (United States)

    Molley, Perry A.

    1996-02-01

    applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment for automation of processes such as inspection, assembly, welding, material dispensing and other process control tasks. Although there are many academic and commercial solutions that have been developed, none have had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword. The benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with features used for the recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.

  4. Computer vision and action recognition a guide for image processing and computer vision community for action understanding

    CERN Document Server

    Ahad, Md Atiqur Rahman

    2011-01-01

    Human action analysis and recognition are challenging problems due to large variations in human motion and appearance, camera viewpoint and environment settings. The field of action and activity representation and recognition is relatively old, yet not well understood by students and the research community. Some important but common motion recognition problems remain unsolved by the computer vision community. In the last decade, however, a number of good approaches have been proposed and subsequently evaluated by many researchers. Among those methods, some get significant atte

  5. Computer vision uncovers predictors of physical urban change.

    Science.gov (United States)

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements-an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements-an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods-an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  6. The Pixhawk Open-Source Computer Vision Framework for Mavs

    Science.gov (United States)

    Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.

    2011-09-01

    Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State of the art autonomous systems are however geared towards the application area in safe and obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  7. Computer Vision-Based Image Analysis of Bacteria.

    Science.gov (United States)

    Danielsen, Jonas; Nordenfelt, Pontus

    2017-01-01

    Microscopy is an essential tool for studying bacteria, but today it is mostly used in a qualitative or, at best, semi-quantitative manner, often involving time-consuming manual analysis. This makes it difficult to assess the importance of individual bacterial phenotypes, especially when there are only subtle differences in features such as shape, size, or signal intensity, which are typically very difficult for the human eye to discern. With computer vision-based image analysis, where computer algorithms interpret image data, it is possible to achieve an objective and reproducible quantification of images in an automated fashion. Besides being a much more efficient and consistent way to analyze images, this can also reveal important information that was previously hard to extract with traditional methods. Here, we present basic concepts of automated image processing, segmentation and analysis that can be relatively easily implemented for use in bacterial research.
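
    A minimal sketch of the kind of automated quantification described here (not the chapter's specific protocol): threshold a microscopy image, label connected components, and report per-cell area and intensity. The input file, the assumption that cells are brighter than the background, and the minimum-area cut-off are all illustrative.

    import cv2
    import numpy as np

    img = cv2.imread("bacteria_field.png", cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(img, (5, 5), 0)

    # Otsu threshold; bacteria assumed brighter than background (invert otherwise).
    _, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Connected-component analysis gives one label per bacterium plus statistics.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
    for i in range(1, n):                                  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < 10:                                      # discard tiny specks
            continue
        mean_intensity = img[labels == i].mean()
        print(f"cell {i}: area={area} px, mean intensity={mean_intensity:.1f}")
    print("total cells:", sum(stats[1:, cv2.CC_STAT_AREA] >= 10))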

  8. Polynomial Eigenvalue Solutions to Minimal Problems in Computer Vision.

    Science.gov (United States)

    Kukelova, Zuzana; Bujnak, Martin; Pajdla, Tomas

    2012-07-01

    We present a method for solving systems of polynomial equations appearing in computer vision. This method is based on polynomial eigenvalue solvers and is more straightforward and easier to implement than the state-of-the-art Gröbner basis method since eigenvalue problems are well studied, easy to understand, and efficient and robust algorithms for solving these problems are available. We provide a characterization of problems that can be efficiently solved as polynomial eigenvalue problems (PEPs) and present a resultant-based method for transforming a system of polynomial equations to a polynomial eigenvalue problem. We propose techniques that can be used to reduce the size of the computed polynomial eigenvalue problems. To show the applicability of the proposed polynomial eigenvalue method, we present the polynomial eigenvalue solutions to several important minimal relative pose problems.
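
    A small worked example (not taken from the paper) of the underlying mechanism: a quadratic polynomial eigenvalue problem (A2 λ² + A1 λ + A0) x = 0 is linearized into a generalized eigenvalue problem and handed to a standard solver.

    import numpy as np
    from scipy.linalg import eig

    # Toy quadratic polynomial eigenvalue problem (A2*l**2 + A1*l + A0) x = 0.
    rng = np.random.default_rng(0)
    A0, A1, A2 = (rng.standard_normal((3, 3)) for _ in range(3))

    # Companion-style linearization: solve C1 v = lambda * C2 v with v = [x, lambda*x].
    n = A0.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    C1 = np.block([[Z, I], [-A0, -A1]])
    C2 = np.block([[I, Z], [Z, A2]])
    eigvals, _ = eig(C1, C2)

    # Sanity check: each finite eigenvalue should make the polynomial matrix
    # (nearly) singular, i.e. its smallest singular value should be tiny.
    for lam in eigvals:
        if np.isfinite(lam):
            P = A2 * lam ** 2 + A1 * lam + A0
            print(lam, np.linalg.svd(P, compute_uv=False)[-1])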

  9. THE PIXHAWK OPEN-SOURCE COMPUTER VISION FRAMEWORK FOR MAVS

    Directory of Open Access Journals (Sweden)

    L. Meier

    2012-09-01

    Full Text Available Unmanned aerial vehicles (UAV and micro air vehicles (MAV are already intensively used in geodetic applications. State of the art autonomous systems are however geared towards the application area in safe and obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  10. Heterogeneous compute in computer vision: OpenCL in OpenCV

    Science.gov (United States)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on CPU or on a OpenCL enabled device, without even recompiling.
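
    The Transparent API mentioned above is exposed in OpenCV's Python bindings through cv2.UMat; the sketch below runs a small pipeline that executes on an OpenCL device when one is available and falls back to the CPU otherwise. The image path is a placeholder.

    import cv2

    # OpenCL is used automatically for UMat arguments when a device is available;
    # the same calls fall back to the CPU otherwise (the "Transparent API").
    print("OpenCL available:", cv2.ocl.haveOpenCL())
    cv2.ocl.setUseOpenCL(True)

    img = cv2.imread("scene.png")
    u_img = cv2.UMat(img)                          # upload to an OpenCL-backed buffer

    u_gray = cv2.cvtColor(u_img, cv2.COLOR_BGR2GRAY)
    u_blur = cv2.GaussianBlur(u_gray, (7, 7), 1.5)
    u_edges = cv2.Canny(u_blur, 50, 150)

    edges = u_edges.get()                          # download the result as a numpy array
    cv2.imwrite("edges.png", edges)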

  11. Electromagnetism Tutorial (Tutorial de Eletromagnetismo)

    CERN Document Server

    Dantas, Christine C

    2009-01-01

    The present tutorial aims at covering the fundamentals of electromagnetism in a condensed and clear manner. Some solved and proposed exercises have been included. The reader is assumed to have knowledge of basic electricity, partial derivatives and multiple integrals.

  12. Automated cutting in the food industry using computer vision

    KAUST Repository

    Daley, Wayne D R

    2012-01-01

    The processing of natural products has posed a significant problem to researchers and developers involved in the development of automation. The challenges have come from areas such as sensing, grasping and manipulation, as well as product-specific areas such as cutting and handling of meat products. Meat products are naturally variable, and fixed automation is at its limit as far as its ability to accommodate these products. Intelligent automation systems (such as robots) are also challenged, mostly because of a lack of knowledge of the physical characteristics of the individual products. Machine vision has helped to address some of these shortcomings but underperforms in many situations. Developments in sensors, software and processing power are now offering capabilities that will help to make more of these problems tractable. In this chapter we describe some of the developments that are underway in terms of computer vision for meat product applications, the problems they are addressing and potential future trends. © 2012 Woodhead Publishing Limited All rights reserved.

  13. Mechanical characterization of artificial muscles with computer vision

    Science.gov (United States)

    Verdu, R.; Morales-Sanchez, Juan; Fernandez-Romero, Antonio J.; Cortes, M. T.; Otero, Toribio F.; Weruaga-Prieto, Luis

    2002-07-01

    Conducting polymers are new materials that were developed in the late 1970s as intrinsically electronic conductors at the molecular level. The presence of polymer, solvent, and ionic components is reminiscent of the composition of the materials chosen by nature to produce muscles, neurons, and skin in living creatures. The ability to transform electrical energy into mechanical energy through an electrochemical reaction, promoting film swelling and shrinking during oxidation or reduction, respectively, produces a macroscopic change in film volume. On specially designed bi-layer polymeric stripes, this conformational change gives rise to stripe curl and bending, where the position or angle of the free end of the polymeric stripe is directly related to the degree of oxidation, or charge consumed. Study of these curvature variations has so far been performed only on a manual basis. In this paper we propose a preliminary study of the electromechanical properties of polymeric muscles using a computer vision system. The vision system required is simple: it is composed of cameras that track the muscle from different angles and special algorithms, based on active contours, that analyse the deformable motion. Graphical results support the validity of this approach, which opens the way to automatic testing of artificial muscles for commercial purposes.

  14. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  15. Computer vision techniques for the diagnosis of skin cancer

    CERN Document Server

    Celebi, M

    2014-01-01

    The goal of this volume is to summarize the state-of-the-art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy, have recently come to the forefront in providing greater diagnostic accuracy. These imaging technologies presented in this book can serve as an adjunct to physicians and  provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for pa...

  16. Prediction of pork color attributes using computer vision system.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng Hung; Bachmeier, Laura; Somers, Rose Marie; Chen, Kun Jie; Newman, David

    2016-03-01

    Color image processing and regression methods were utilized to evaluate the color score of pork center cut loin samples. One hundred loin samples of subjective color scores 1 to 5 (NPB, 2011; n=20 for each color score) were selected to determine correlation values between Minolta colorimeter measurements and image processing features. Eighteen image color features were extracted from three different color spaces: RGB (red, green, blue), HSI (hue, saturation, intensity) and L*a*b*. When comparing Minolta colorimeter values with those obtained from image processing, correlations were significant for color attributes. The proposed linear regression model had a coefficient of determination (R(2)) of 0.83, compared to the stepwise regression result of R(2)=0.70. These results indicate that computer vision methods have the potential to be used as a tool for predicting pork color attributes.
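
    The authors' 18-feature model is not given in the abstract, so the sketch below only illustrates the general approach: mean channel values from three colour spaces per loin image, fitted to the subjective scores with an ordinary linear regression. The file names and sample list are placeholders.

    import cv2
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    def color_features(path):
        """Mean channel values in RGB, HSV and L*a*b* for one loin image."""
        bgr = cv2.imread(path)
        feats = []
        for code in (cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2LAB):
            feats.extend(cv2.mean(cv2.cvtColor(bgr, code))[:3])
        return feats                                   # 9 features per sample

    # `samples` pairs image paths with their subjective color scores (1-5);
    # the file names here are placeholders for the real data set.
    samples = [(f"loin_{i:03d}.png", score)
               for i, score in enumerate([1, 2, 3, 4, 5] * 20)]
    X = np.array([color_features(p) for p, _ in samples])
    y = np.array([s for _, s in samples])

    model = LinearRegression().fit(X, y)
    print("R^2 on the training data:", r2_score(y, model.predict(X)))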

  17. Jet-Images: Computer Vision Inspired Techniques for Jet Tagging

    CERN Document Server

    Cogan, Josh; Strauss, Emanuel; Schwarztman, Ariel

    2014-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon- initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
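
    A minimal sketch of the classification step: treat each (already preprocessed) jet-image as a flattened pixel vector and train a Fisher linear discriminant to separate boosted-W jets from quark- and gluon-initiated jets. The arrays below are random stand-ins for real calorimeter images, so the reported accuracy is meaningless; only the mechanics are illustrated.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    # Placeholder data: each jet-image is a 25x25 grid of calorimeter tower energies,
    # flattened to a 625-dimensional pixel vector (real images would first be
    # preprocessed by translation, rotation and normalisation).
    rng = np.random.default_rng(0)
    signal = rng.exponential(1.0, size=(1000, 25 * 25))      # stand-in for W jets
    background = rng.exponential(0.8, size=(1000, 25 * 25))  # stand-in for QCD jets

    X = np.vstack([signal, background])
    y = np.array([1] * len(signal) + [0] * len(background))
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fisher discriminant analysis on the pixel vectors; the learned weight image
    # shows which regions of the jet-image drive the discrimination.
    fisher = LinearDiscriminantAnalysis()
    fisher.fit(X_train, y_train)
    print("test accuracy:", fisher.score(X_test, y_test))
    weight_image = fisher.coef_.reshape(25, 25)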

  18. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
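
    The update rule described above can be sketched numerically: project the internal 3-D state into both images, form the 2-D error against the measured feature positions, and correct the state with the pseudo-inverse of the projection Jacobian. The rectified two-camera pinhole model and all numbers below are illustrative, not the original system's calibration.

    import numpy as np

    # Two pinhole cameras separated along x by a baseline (simple rectified stereo rig).
    F = 500.0                      # focal length in pixels
    BASELINE = 0.1                 # metres

    def project(state):
        """Predict 2-D feature positions (uL, vL, uR, vR) from a 3-D point [x, y, z]."""
        x, y, z = state
        return np.array([F * x / z, F * y / z,
                         F * (x - BASELINE) / z, F * y / z])

    def jacobian(state, eps=1e-6):
        """Numerical Jacobian of the projection with respect to the 3-D state."""
        J = np.zeros((4, 3))
        for k in range(3):
            d = np.zeros(3)
            d[k] = eps
            J[:, k] = (project(state + d) - project(state - d)) / (2 * eps)
        return J

    state = np.array([0.05, 0.02, 1.2])             # internal model of the 3-D position
    measured = np.array([30.0, 12.0, -15.0, 12.0])  # feature positions seen by the cameras

    for _ in range(5):                              # iterate the correction a few times
        error_2d = measured - project(state)        # error in image space
        state += np.linalg.pinv(jacobian(state)) @ error_2d
    print("updated 3-D estimate:", state)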

  19. Automatic Plant Annotation Using 3D Computer Vision

    DEFF Research Database (Denmark)

    Nielsen, Michael

    In this thesis, 3D reconstruction was investigated for application in precision agriculture, where previous work focused on low-resolution index maps in which each pixel represents an area in the field and the index represents an overall crop status in that area. 3D reconstructions of plants would allow for more detailed descriptions of the state of the crops, analogous to the way humans evaluate crop health, i.e. by looking at the canopy structure and checking for discolorations at specific locations on the plants. Previous research in 3D reconstruction methods based on cameras has focused on rigid … in active shape modeling of weeds for weed detection. Occlusion and overlapping leaves were the main problems for this kind of work. Using 3D computer vision it was possible to separate overlapping crop leaves from weed leaves using the 3D information from the disparity maps. The results of the 3D …

  20. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  1. Computer vision analysis of image motion by variational methods

    CERN Document Server

    Mitiche, Amar

    2014-01-01

    This book presents a unified view of image motion analysis under the variational framework. Variational methods, rooted in physics and mechanics, but appearing in many other domains, such as statistics, control, and computer vision, address a problem from an optimization standpoint, i.e., they formulate it as the optimization of an objective function or functional. The methods of image motion analysis described in this book use the calculus of variations to minimize (or maximize) an objective functional which transcribes all of the constraints that characterize the desired motion variables. The book addresses the four core subjects of motion analysis: Motion estimation, detection, tracking, and three-dimensional interpretation. Each topic is covered in a dedicated chapter. The presentation is prefaced by an introductory chapter which discusses the purpose of motion analysis. Further, a chapter is included which gives the basic tools and formulae related to curvature, Euler Lagrange equations, unconstrained de...
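
    As a concrete instance of the variational recipe described above (an objective functional with a data term and a smoothness term, minimized via its Euler-Lagrange equations), here is a minimal Horn-Schunck-style optical flow sketch; it is the classical textbook scheme, not code from the book.

    import numpy as np
    from scipy.ndimage import convolve

    def horn_schunck(im1, im2, alpha=1.0, iters=100):
        """Minimise the Horn-Schunck functional: data term + alpha^2 * smoothness term."""
        im1 = im1.astype(np.float64)
        im2 = im2.astype(np.float64)
        # Spatio-temporal image derivatives.
        kx = np.array([[-1, 1], [-1, 1]]) * 0.25
        ky = np.array([[-1, -1], [1, 1]]) * 0.25
        kt = np.ones((2, 2)) * 0.25
        Ix = convolve(im1, kx) + convolve(im2, kx)
        Iy = convolve(im1, ky) + convolve(im2, ky)
        It = convolve(im2, kt) - convolve(im1, kt)

        # Jacobi-style iterations of the Euler-Lagrange equations.
        avg = np.array([[0, 0.25, 0], [0.25, 0, 0.25], [0, 0.25, 0]])
        u = np.zeros_like(im1)
        v = np.zeros_like(im1)
        for _ in range(iters):
            u_bar = convolve(u, avg)
            v_bar = convolve(v, avg)
            num = Ix * u_bar + Iy * v_bar + It
            den = alpha ** 2 + Ix ** 2 + Iy ** 2
            u = u_bar - Ix * num / den
            v = v_bar - Iy * num / den
        return u, v

    # Example use on two consecutive frames (paths are placeholders):
    # u, v = horn_schunck(cv2.imread("frame0.png", 0), cv2.imread("frame1.png", 0))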

  2. Codesign Environment for Computer Vision Hw/Sw Systems

    Science.gov (United States)

    Toledo, Ana; Cuenca, Sergio; Suardíaz, Juan

    2006-10-01

    In this paper we present a novel codesign environment conceived especially for hybrid computer vision systems. The setting is based on the Mathworks Simulink and Xilinx System Generator tools and comprises the following: an incremental codesign flow, diverse libraries of virtual components with three levels of description (high level, hardware and software), semi-automatic tools to help partition the system, and a methodology for building new library components. The use of high-level libraries allows systems to be developed without exhaustive knowledge of the target architecture or special skills in hardware description languages. This enables a non-traumatic incorporation of reconfigurable technologies into image processing systems developed by engineers who are not closely involved with hardware design disciplines.

  3. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community.  Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website, focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching;presents basic and  advanced template matching techniques, targeting grey-level images, shapes and point sets;discusses recent pattern classification paradigms from a template matching perspective;illustrates the development of a real fac...
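    As a concrete illustration of the technique the book is about, a minimal grey-level template match by normalized cross-correlation with OpenCV might look like the sketch below; the image and template file names are placeholders.

    # Sketch: template matching by normalized cross-correlation.
    import cv2

    image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)     # placeholder path
    template = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

    # Correlation map: values near 1.0 indicate good matches.
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    h, w = template.shape
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    print("best match score %.3f at %s" % (max_val, (top_left, bottom_right)))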

  4. Localization System for a Mobile Robot Using Computer Vision Techniques

    Directory of Open Access Journals (Sweden)

    Rony Cruz Ramírez

    2012-05-01

    Full Text Available Mobile Robotics is a subject with multiple fields of action, hence studies in this area are of vital importance. This paper describes the development of a localization system for a mobile robot using Computer Vision. A webcam is placed at a height from which the navigation environment can be seen. A LEGO NXT kit is used to build a wheeled mobile robot with a differential drive configuration. The software is programmed in C++ using the OpenCV 2.0 function library. This software handles the webcam, processes the captured images, calculates the location, and controls and communicates with the robot via Bluetooth. It also implements a kinematic position controller, and several experiments were performed to verify the reliability of the localization system. The results of one such experiment are described here.

  5. COMPUTER VISION IN THE TEMPLES OF KARNAK: PAST, PRESENT & FUTURE

    Directory of Open Access Journals (Sweden)

    V. Tournadre

    2017-05-01

    Full Text Available CFEETK, the French-Egyptian Center for the Study of the Temples of Karnak, is celebrating this year the 50th anniversary of its foundation. As a multicultural and transdisciplinary research center, it has always been a playground for testing emerging technologies applied to various fields. The rise of automatic computer vision algorithms is an interesting topic, as it allows non-experts to produce high-value results. This article presents the evolution of measurement experiments over the past 50 years, and it describes how cameras are used today. Ultimately, it aims to set the trends of the upcoming projects and it discusses how image processing could contribute further to the study and the conservation of cultural heritage.

  6. Computer Vision in the Temples of Karnak: Past, Present & Future

    Science.gov (United States)

    Tournadre, V.; Labarta, C.; Megard, P.; Garric, A.; Saubestre, E.; Durand, B.

    2017-05-01

    CFEETK, the French-Egyptian Center for the Study of the Temples of Karnak, is celebrating this year the 50th anniversary of its foundation. As a multicultural and transdisciplinary research center, it has always been a playground for testing emerging technologies applied to various fields. The rise of automatic computer vision algorithms is an interesting topic, as it allows non-experts to produce high-value results. This article presents the evolution of measurement experiments over the past 50 years, and it describes how cameras are used today. Ultimately, it aims to set the trends of the upcoming projects and it discusses how image processing could contribute further to the study and the conservation of cultural heritage.

  7. State-Estimation Algorithm Based on Computer Vision

    Science.gov (United States)

    Bayard, David; Brugarolas, Paul

    2007-01-01

    An algorithm and software to implement the algorithm are being developed as a means to estimate the state (that is, the position and velocity) of an autonomous vehicle relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.
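    The record gives no implementation details, but the generic idea of estimating position and velocity from vision-derived position fixes can be sketched with a constant-velocity Kalman filter; the one-dimensional state, noise covariances and measurement values below are illustrative assumptions, not the algorithm described in the record.

    # Sketch: constant-velocity Kalman filter fed by vision-based position fixes (1-D).
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (position, velocity)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = 1e-3 * np.eye(2)                    # process noise (assumed)
    R = np.array([[0.05]])                  # measurement noise (assumed)

    x = np.zeros((2, 1))                    # initial state estimate
    P = np.eye(2)                           # initial covariance

    def kalman_step(x, P, z):
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x         # innovation from the vision measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y                       # update
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [0.0, 0.11, 0.19, 0.32, 0.41]:  # placeholder measurements
        x, P = kalman_step(x, P, z)
    print("estimated position %.3f, velocity %.3f" % (x[0, 0], x[1, 0]))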

  8. Computer vision techniques for rotorcraft low-altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

    A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  9. A shape representation for computer vision based on differential topology.

    Science.gov (United States)

    Blicher, A P

    1995-01-01

    We describe a shape representation for use in computer vision, after a brief review of shape representation and object recognition in general. Our shape representation is based on graph structures derived from level sets whose characteristics are understood from differential topology, particularly singularity theory. This leads to a representation which is both stable and whose changes under deformation are simple. The latter allows smoothing in the representation domain ('symbolic smoothing'), which in turn can be used for coarse-to-fine strategies, or as a discrete analog of scale space. Essentially the same representation applies to an object embedded in 3-dimensional space as to one in the plane, and likewise for a 3D object and its silhouette. We suggest how this can be used for recognition.

  10. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
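    The correction step described above, mapping the 2-D prediction error back to the 3-D model through a generalized inverse Jacobian, can be sketched as follows; the state dimension, Jacobian and feature vectors are placeholders, not the original system's values.

    # Sketch: correct the internal 3-D model from the 2-D feature error.
    import numpy as np

    def update_model(model_state, predicted_2d, observed_2d, jacobian):
        """model_state: (n,) pose/velocity parameters; predicted_2d and
        observed_2d: (m,) stacked image coordinates; jacobian: (m, n)."""
        error_2d = observed_2d - predicted_2d
        correction = np.linalg.pinv(jacobian) @ error_2d   # generalized inverse
        return model_state + correction

    state = np.zeros(6)                          # e.g. position + velocity parameters
    J = np.random.default_rng(1).random((8, 6))  # placeholder Jacobian
    predicted = np.zeros(8)
    observed = 0.01 * np.ones(8)
    state = update_model(state, predicted, observed, J)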

  12. Computer Vision Aided Measurement of Morphological Features in Medical Optics

    Directory of Open Access Journals (Sweden)

    Bogdana Bologa

    2010-09-01

    Full Text Available This paper presents a computer vision aided method for non-invasive interpupillary distance (IPD) measurement. IPD is a morphological feature required in any ophthalmological frame prescription. A good frame prescription nowadays depends highly on accurate IPD estimation in order for the lenses to be free of eye strain. The idea is to replace the ruler or the pupilometer with a more accurate method while keeping the patient's eyes free from any movement or gaze restrictions. The method proposed in this paper uses a video camera and a point light source in order to determine the IPD with sub-millimeter error. The results are compared against standard eye and object detection routines from the literature.

  13. Identification of cichlid fishes from Lake Malawi using computer vision.

    Directory of Open Access Journals (Sweden)

    Deokjin Joo

    Full Text Available BACKGROUND: The explosively radiating evolution of cichlid fishes of Lake Malawi has yielded an amazing number of haplochromine species, estimated at as many as 500 to 800, with a surprising degree of diversity not only in color and stripe pattern but also in the shape of jaw and body. As these morphological diversities have been a central subject of adaptive speciation and taxonomic classification, such high diversity could serve as a foundation for automation of species identification of cichlids. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate a method for automatic classification of the Lake Malawi cichlids based on computer vision and geometric morphometrics. To this end we developed a pipeline that integrates multiple image processing tools to automatically extract informative features of color and stripe patterns from a large set of photographic images of wild cichlids. The extracted information was evaluated by the statistical classifiers Support Vector Machine and Random Forests. Both classifiers performed better when body shape information was added to the color and stripe features. Besides the coloration and stripe pattern, body shape variables boosted the accuracy of classification by about 10%. The programs were able to classify 594 live cichlid individuals belonging to 12 different classes (species and sexes) with an average accuracy of 78%, in contrast to a mere 42% success rate by human eyes. The variables that contributed most to the accuracy were body height and the hue of the most frequent color. CONCLUSIONS: Computer vision showed a notable performance in extracting information from the color and stripe patterns of Lake Malawi cichlids, although the information was not enough for errorless species identification. Our results indicate that there appears to be an unavoidable difficulty in automatic species identification of cichlid fishes, which may arise from short divergence times and gene flow between closely related species.
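    A minimal sketch of the classification stage described above, SVM and Random Forests on pre-extracted color/stripe and body-shape features, is given below; the feature matrices and labels are random placeholders, since the real pipeline also includes the image-processing steps that produce those features.

    # Sketch: compare classifiers with and without body-shape variables.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_color = rng.random((594, 20))    # color/stripe descriptors (placeholder)
    X_shape = rng.random((594, 10))    # geometric-morphometric shape variables
    y = rng.integers(0, 12, 594)       # 12 classes (species x sex)

    for name, X in [("color only", X_color),
                    ("color + shape", np.hstack([X_color, X_shape]))]:
        for clf in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=200)):
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(name, type(clf).__name__, round(acc, 3))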

  14. Screening for diabetic retinopathy using computer vision and physiological markers.

    Science.gov (United States)

    Hann, Christopher E; Revie, James A; Hewett, Darren; Chase, J Geoffrey; Shaw, Geoffrey M

    2009-07-01

    Hyperglycemia and diabetes result in vascular complications, most notably diabetic retinopathy (DR). The prevalence of DR is growing and it is a leading cause of blindness and/or visual impairment in developed countries. Current methods of detecting, screening, and monitoring DR are based on subjective human evaluation, which is also slow and time-consuming. As a result, initiation and progress monitoring of DR is clinically hard. Computer vision methods are developed to isolate and detect two of the most common DR manifestations: dot hemorrhages (DH) and exudates. The algorithms use specific color channels and segmentation methods to separate these DR manifestations from physiological features in digital fundus images. The algorithms are tested on the first 100 images from a published database. The diagnostic outcome and the resulting positive and negative prediction values (PPV and NPV) are reported. The first 50 images are marked with specialist-determined ground truth for each individual exudate and/or DH, which are also compared to the algorithm's identification. Exudate identification had 96.7% sensitivity and 94.9% specificity for diagnosis (PPV = 97%, NPV = 95%). Dot hemorrhage identification had 98.7% sensitivity and 100% specificity (PPV = 100%, NPV = 96%). Greater than 95% of the ground-truth-identified exudates and DHs were found by the algorithm in the marked first 50 images, with less than 0.5% false positives. A direct computer vision approach enabled high-quality identification of exudates and DHs in an independent data set of fundus images. The methods are readily generalizable to other clinical manifestations of DR. The results justify a blinded clinical trial of the system to prove its capability to detect, diagnose, and, over the long term, monitor the state of DR in individuals with diabetes. Copyright 2009 Diabetes Technology Society.
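    The published algorithm is more involved, but the general idea of isolating bright exudate candidates with color-channel processing and segmentation can be sketched as below; the file path, Otsu thresholding and area cut-off are assumptions, and a real system would also mask the (equally bright) optic disc.

    # Sketch: flag bright exudate candidates in a fundus image.
    import cv2
    import numpy as np

    fundus = cv2.imread("fundus.png")              # placeholder path (BGR image)
    green = fundus[:, :, 1]                        # exudates appear bright in green
    blur = cv2.GaussianBlur(green, (5, 5), 0)
    _, candidates = cv2.threshold(blur, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep only regions large enough to be plausible lesions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
    mask = np.zeros_like(candidates)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > 20:        # area threshold is an assumption
            mask[labels == i] = 255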

  15. Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.

    Science.gov (United States)

    Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar

    2016-05-01

    Physical rehabilitation supported by the computer-assisted-interface is gaining popularity among health-care fraternity. In this paper, we have proposed a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. Leap motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have proposed an interface using Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) have been used to classify gesture sequence performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly while applied on isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.

  16. Computer and visual display terminals (VDT) vision syndrome (CVDTS).

    Science.gov (United States)

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-07-01

    Computers and visual display terminals have become an essential part of the modern lifestyle. The use of these devices has made our life simpler in household work as well as in offices. However, the prolonged use of these devices is not without complications. Computer and visual display terminal syndrome is a constellation of ocular as well as extraocular symptoms associated with prolonged use of visual display terminals. This syndrome is gaining importance in the modern era because of the widespread use of these technologies in day-to-day life. It is associated with asthenopic symptoms, visual blurring, dry eyes, musculoskeletal symptoms such as neck pain, back pain, shoulder pain, carpal tunnel syndrome, psychosocial factors, venous thromboembolism, shoulder tendonitis, and elbow epicondylitis. Proper identification of symptoms and causative factors is necessary for accurate diagnosis and management. This article focuses on the various aspects of computer and visual display terminal vision syndrome described in the previous literature. Further research is needed for a better understanding of its complex pathophysiology and management.

  17. NVidia Tutorial

    CERN Document Server

    CERN. Geneva; MESSMER, Peter; DEMOUTH, Julien

    2015-01-01

    This tutorial will present Caffe, a deep learning framework with a Python interface for implementing solutions on CPUs and GPUs, and explain how to use it to build and train Convolutional Neural Networks using NVIDIA GPUs. The session requires no prior experience with GPUs or Caffe.

  18. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    Science.gov (United States)

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  19. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    There is an increasing interest in using 3D computer vision in precision agriculture. This calls for better quantitative evaluation and understanding of computer vision methods. This paper proposes a test framework using ray traced crop scenes that allows in-depth analysis of algorithm performance...

  20. Computer vision for foreign body detection and removal in the food industry

    Science.gov (United States)

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  1. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    Science.gov (United States)

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…

  2. The face of an imposter: computer vision for deception detection research in progress

    NARCIS (Netherlands)

    Elkins, Aaron C.; Sun, Yijia; Zafeiriou, Stefanos; Pantic, Maja

    2013-01-01

    Using video analyzed from a novel deception experiment, this paper introduces computer vision research in progress that addresses two critical components to computational modeling of deceptive behavior: 1) individual nonverbal behavior differences, and 2) deceptive ground truth. Video interviews ana

  3. Blink rate, incomplete blinks and computer vision syndrome.

    Science.gov (United States)

    Portello, Joan K; Rosenfield, Mark; Chu, Christina A

    2013-05-01

    Computer vision syndrome (CVS), a highly prevalent condition, is frequently associated with dry eye disorders. Furthermore, a reduced blink rate has been observed during computer use. The present study examined whether post task ocular and visual symptoms are associated with either a decreased blink rate or a higher prevalence of incomplete blinks. An additional trial tested whether increasing the blink rate would reduce CVS symptoms. Subjects (N = 21) were required to perform a continuous 15-minute reading task on a desktop computer at a viewing distance of 50 cm. Subjects were videotaped during the task to determine their blink rate and amplitude. Immediately after the task, subjects completed a questionnaire regarding ocular symptoms experienced during the trial. In a second session, the blink rate was increased by means of an audible tone that sounded every 4 seconds, with subjects being instructed to blink on hearing the tone. The mean blink rate during the task without the audible tone was 11.6 blinks per minute (SD, 7.84). The percentage of blinks deemed incomplete for each subject ranged from 0.9 to 56.5%, with a mean of 16.1% (SD, 15.7). A significant positive correlation was observed between the total symptom score and the percentage of incomplete blinks during the task (p = 0.002). Furthermore, a significant negative correlation was noted between the blink score and symptoms (p = 0.035). Increasing the mean blink rate to 23.5 blinks per minute by means of the audible tone did not produce a significant change in the symptom score. Whereas CVS symptoms are associated with a reduced blink rate, the completeness of the blink may be equally significant. Because instructing a patient to increase his or her blink rate may be ineffective or impractical, actions to achieve complete corneal coverage during blinking may be more helpful in alleviating symptoms during computer operation.

  4. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different areas of computer vision, such as personal identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and computer iridology in particular. The article also considers the problem of colour spaces, which are used as a filter and as a pre-processing step for images. Method of AdaB...
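    A minimal sketch of how these two methods are commonly combined, LBP histograms as features and AdaBoost as the classifier, is shown below; the images and labels are random placeholders and the parameters are illustrative, not those of the cited work.

    # Sketch: LBP histogram features + AdaBoost classifier.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.ensemble import AdaBoostClassifier

    def lbp_histogram(gray, P=8, R=1.0):
        codes = local_binary_pattern(gray, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, (200, 64, 64)).astype(np.uint8)  # placeholders
    labels = rng.integers(0, 2, 200)

    X = np.array([lbp_histogram(img) for img in images])
    clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)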

  5. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    Science.gov (United States)

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  6. TMVA tutorial

    CERN Document Server

    CERN. Geneva; VOSS, Helge

    2015-01-01

    This tutorial will both give an introduction on how to use TMVA in ROOT 6 and showcase some new features, such as modularity, variable importance, and interfaces to R and Python. After explaining the basic functionality, the typical steps required during a real-life application (such as variable selection, pre-processing, tuning and classifier evaluation) will be demonstrated on simple examples. The first part of the tutorial will use the usual ROOT interface (please make sure you have ROOT 6.04 installed somewhere). The second part will utilize the new server notebook functionality of ROOT as a Service. If you are within CERN but outside the venue, or outside CERN, please consult the attached notes.

  7. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    Directory of Open Access Journals (Sweden)

    Kevin A. Hallgren

    2012-02-01

    Full Text Available Many research designs require the assessment of inter-rater reliability (IRR to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR.
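    The paper provides SPSS and R syntax; an equivalent computation of Cohen's kappa in Python, with made-up ratings from two coders, is sketched below.

    # Sketch: Cohen's kappa for two raters' categorical codes.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]   # made-up observational codes
    rater_b = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]
    print("Cohen's kappa:", round(cohen_kappa_score(rater_a, rater_b), 3))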

  8. Atoms of recognition in human and computer vision.

    Science.gov (United States)

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-01

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  9. A computer vision based candidate for functional balance test.

    Science.gov (United States)

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

    Balance in humans is a motor skill based on complex multimodal sensing, processing and control. Ability to maintain balance in activities of daily living (ADL) is compromised due to aging, diseases, injuries and environmental factors. Center for Disease Control and Prevention (CDC) estimate of the costs of falls among older adults was $34 billion in 2013 and is expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for functional balance test. The test will take less than a minute to administer and expected to be objective, repeatable and highly discriminative in quantifying ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called BTrackS Balance Assessment Board. Our results show high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigations to assess validity in clinical settings, including acute care, long term care and assisted living care facilities. Our long term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.

  10. Selection of Norway spruce somatic embryos by computer vision

    Science.gov (United States)

    Hamalainen, Jari J.; Jokinen, Kari J.

    1993-05-01

    A computer vision system was developed for the classification of plant somatic embryos. The embryos are in a Petri dish that is transferred at constant speed, and they are recognized as they pass a line scan camera. A classification algorithm needs to be installed for every plant species. This paper describes an algorithm for the recognition of Norway spruce (Picea abies) embryos. A short review of conifer micropropagation by somatic embryogenesis is also given. The recognition algorithm is based on features calculated from the boundary of the object. Only the part of the boundary corresponding to the developing cotyledons (2 - 15) and the straight sides of the embryo is used for recognition. An index of the length of the cotyledons describes the developmental stage of the embryo. The testing set for classifier performance consisted of 118 embryos and 478 nonembryos. With the classification tolerances chosen, 69% of the objects classified as embryos by a human classifier were selected and 31% rejected. Less than 1% of the nonembryos were classified as embryos. The basic features developed can probably be easily adapted for the recognition of other conifer somatic embryos.

  11. A Computer Vision Approach to Identify Einstein Rings and Arcs

    Science.gov (United States)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at every position angle, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in the images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.
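    The second tier described above, recognising circular patterns with the circle Hough transform, can be sketched with OpenCV as follows; the cutout path and all parameter values are assumptions rather than the survey-tuned settings.

    # Sketch: circle Hough transform on a cutout around a lens candidate.
    import cv2
    import numpy as np

    cutout = cv2.imread("lens_candidate.png", cv2.IMREAD_GRAYSCALE)  # placeholder
    blur = cv2.medianBlur(cutout, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            print("possible ring/arc at (%d, %d), radius %d px" % (x, y, r))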

  12. Measurement of meat color using a computer vision system.

    Science.gov (United States)

    Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada

    2013-01-01

    The limits of the colorimeter and a technique of image analysis in evaluating the color of beef, pork, and chicken were investigated. The Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the image of the sample displayed on the monitor, a similarity test was carried out using a trained panel. The panelists found the digital images of the samples visualized on the monitor very similar to the actual ones. Two further colors were then generated by the software Adobe Photoshop CS3, one using the L, a and b values read by the colorimeter and the other obtained using the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). While the panelists found the digital images very similar to the actual samples, they found significant differences between the two generated colors: the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle and chroma obtained with the CVS and the colorimeter were statistically significant. The colorimeter therefore did not appear to give valid measurements of the color of meat; instead, the CVS method seemed to give valid measurements that reproduced a color very similar to the real one.
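    The basic operation behind such a CVS, measuring mean L*, a*, b* (and derived hue angle and chroma) over a sample region of a digital image, can be sketched as below; the image path and region of interest are placeholders, and a real system additionally requires controlled illumination and color calibration against reference tiles.

    # Sketch: mean CIELAB color of a meat sample region.
    import cv2
    import numpy as np

    img = cv2.imread("steak.png")                    # placeholder BGR image
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

    roi = lab[100:300, 150:400]                      # sample region (assumed)
    L = roi[:, :, 0].mean() * 100.0 / 255.0          # OpenCV stores L* scaled to 0-255
    a = roi[:, :, 1].mean() - 128.0                  # a* and b* are offset by 128
    b = roi[:, :, 2].mean() - 128.0
    hue_angle = np.degrees(np.arctan2(b, a))
    chroma = np.hypot(a, b)
    print("L*=%.1f a*=%.1f b*=%.1f hue=%.1f chroma=%.1f"
          % (L, a, b, hue_angle, chroma))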

  13. Computer Vision-Based Portable System for Nitroaromatics Discrimination

    Directory of Open Access Journals (Sweden)

    Nuria López-Ruiz

    2016-01-01

    Full Text Available A computer vision-based portable measurement system is presented in this report. The system is based on a compact reader unit composed of a microcamera and a Raspberry Pi board as control unit. This reader can acquire and process images of a sensor array formed by four nonselective sensing chemistries. Processing these array images it is possible to identify and quantify eight different nitroaromatic compounds (both explosives and related compounds by using chromatic coordinates of a color space. The system is also capable of sending the obtained information after the processing by a WiFi link to a smartphone in order to present the analysis result to the final user. The identification and quantification algorithm programmed in the Raspberry board is easy and quick enough to allow real time analysis. Nitroaromatic compounds analyzed in the range of mg/L were picric acid, 2,4-dinitrotoluene (2,4-DNT, 1,3-dinitrobenzene (1,3-DNB, 3,5-dinitrobenzonitrile (3,5-DNBN, 2-chloro-3,5-dinitrobenzotrifluoride (2-C-3,5-DNBF, 1,3,5-trinitrobenzene (TNB, 2,4,6-trinitrotoluene (TNT, and tetryl (TT.

  14. Traffic light detection and intersection crossing using mobile computer vision

    Science.gov (United States)

    Grewei, Lynne; Lagali, Christopher

    2017-05-01

    The solution for intersection detection and crossing to support the development of blindBike, an assisted biking system for the visually impaired, is discussed. Traffic light detection and intersection crossing are key needs in the task of biking. These problems are tackled through the use of mobile computer vision, in the form of a mobile application on an Android phone. This research builds on previous traffic light detection algorithms with a focus on efficiency and compatibility on a resource-limited platform. Light detection is achieved through blob detection algorithms utilizing training data to detect patterns of red, green and yellow in complex real-world scenarios where multiple lights may be present. Issues of obscurity and scale are also addressed. Safe intersection crossing in blindBike is discussed as well. This module takes a conservative "assistive" technology approach. To achieve this, blindBike uses not only the Android device but also an external Bluetooth/ANT-enabled bike cadence sensor. Real-world testing results are given and future work is discussed.
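    A minimal sketch of the color blob detection step, thresholding the HSV image for red, green and yellow and keeping sufficiently large blobs, is given below; the HSV ranges and the minimum blob area are generic assumptions, not blindBike's tuned values.

    # Sketch: candidate traffic-light blobs by HSV thresholding.
    import cv2
    import numpy as np

    frame = cv2.imread("intersection.jpg")                  # placeholder path
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    ranges = {
        "red":    [((0, 120, 120), (10, 255, 255)), ((170, 120, 120), (180, 255, 255))],
        "yellow": [((20, 120, 120), (35, 255, 255))],
        "green":  [((45, 100, 100), (90, 255, 255))],
    }

    for color, bounds in ranges.items():
        mask = np.zeros(hsv.shape[:2], np.uint8)
        for lo, hi in bounds:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 50:                     # ignore tiny specks
                x, y, w, h = cv2.boundingRect(c)
                print("candidate %s light at" % color, (x, y, w, h))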

  15. Computer Vision Malaria Diagnostic Systems—Progress and Prospects

    Directory of Open Access Journals (Sweden)

    Joseph Joel Pollak

    2017-08-01

    Full Text Available Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.

  16. Defense Data Network/TOPS-20 Tutorial. An Interative Computer Program.

    Science.gov (United States)

    1985-12-01

    Keywords: Defense Data Network, DDN, TOPS-20, computer networking. ...switching network dedicated to meeting the data communication requirements of the DoD. The network is subdivided into two functional areas: (1) the...

  17. A Comparison of Computer-Assisted Instruction and Tutorials in Hematology and Oncology.

    Science.gov (United States)

    Garrett, T. J.; And Others

    1987-01-01

    A study comparing the effectiveness of computer-assisted instruction (CAI) and small group instruction found no significant difference in medical student achievement in oncology but higher achievement through small-group instruction in hematology. Students did not view CAI as more effective, but saw it as a supplement to traditional methods. (MSE)

  18. Hand gesture recognition system based in computer vision and machine learning

    OpenAIRE

    Trigueiros, Paulo; Ribeiro, António Fernando; Reis, L.P.

    2015-01-01

    "Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19" Hand gesture recognition is a natural way of human computer interaction and an area of very active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research applied to Hum...

  19. 3D computer vision using Point Grey Research stereo vision cameras

    Institute of Scientific and Technical Information of China (English)

    Don Murray; Vlad Tucakov; WEI Xiong

    2008-01-01

    This paper provides an introduction to stereo vision systems designed by Point Grey Research and describes the possible applications of these types of systems. The paper presents an overview of stereo vision techniques and outlines the critical aspects of putting together a system that can perform in the real world. It also provides an overview of how the cameras can be used to facilitate stereo research.
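    For illustration, the core computation of any such stereo system, a dense disparity map from a rectified image pair, can be sketched with OpenCV's block matcher; the file names are placeholders (Point Grey/FLIR cameras ship with their own SDK and matching software).

    # Sketch: disparity map from a rectified stereo pair.
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right)   # larger disparity = closer object
    # Depth follows as Z = f * B / d once focal length f and baseline B are calibrated.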

  20. An On-Line Tutorial for the Administrative Sciences Personal Computer Laboratory.

    Science.gov (United States)

    1987-09-01

    An On-Line Tutorial for the Administrative Sciences Personal Computer Laboratory, by Karen M. Overall, Lieutenant, United States Navy; B.S., Eastern New Mexico University, 1979. Submitted in partial...PC Storyboard is a software package that generates automated presentations on an IBM PC or compatible. It allows creation of screen displays of text, figures, charts, or graphics, and then lets you organize them into stories for presentation with a wide variety of special effects. PC Storyboard consists...

  1. A CLINICAL STUDY TO EVALUATE THE ROLE OF AKSHITARPANA, SHIRODHARA AND AN AYURVEDIC COMPOUND IN CHILDHOOD COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    Singh Omendra Pal

    2011-03-01

    Full Text Available Computer vision syndrome is one of the lifestyle disorders seen in children. About 88% of people who use computers every day suffer from this problem, and children are no exception. Computer vision syndrome (CVS) is the complex of eye and vision problems related to near work that is experienced during the use of video display terminals (TVs and computers). Therefore, considering these prospects, a randomized double-blind placebo-controlled study was conducted among 40 children (5-15 years of age) clinically diagnosed with computer vision syndrome to evaluate the role of akshitarpana, shirodhara and an ayurvedic compound in childhood computer vision syndrome.

  2. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I, Introduction to Digital Image Processing and Analysis: Digital Image Processing and Analysis; Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II, Digital Image Analysis and Computer Vision: Introduction to Digital Image Analysis; Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...

  3. Learning openCV computer vision with the openCV library

    CERN Document Server

    Bradski, Gary

    2008-01-01

    Learning OpenCV puts you right in the middle of the rapidly expanding field of computer vision. Written by the creators of OpenCV, the widely used free open-source library, this book introduces you to computer vision and demonstrates how you can quickly build applications that enable computers to "see" and make decisions based on the data. With this book, any developer or hobbyist can get up and running with the framework quickly, whether it's to build simple or sophisticated vision applications.

  4. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision.

    Science.gov (United States)

    Wolff, J Gerard

    2014-01-01

    The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.

  5. Perceptions of online tutorials for distance learning in mathematics and computing

    Directory of Open Access Journals (Sweden)

    Tim Lowe

    2016-07-01

    Full Text Available We report on student and staff perceptions of synchronous online teaching and learning sessions in mathematics and computing. The study is based on two surveys of students and tutors conducted 5 years apart, and focusses on the educational experience as well as societal and accessibility dimensions. Key conclusions are that both staff and students value online sessions, to supplement face-to-face sessions, mainly for their convenience, but interaction within the sessions is limited. Students find the recording of sessions particularly helpful in their studies.

  6. Dynamic programming and graph algorithms in computer vision.

    Science.gov (United States)

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
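    To make the stereo example concrete, a scanline dynamic program that picks one disparity per pixel by minimizing matching cost plus a smoothness penalty is sketched below; the cost model, penalty weight and toy input are illustrative choices, not the specific formulations reviewed in the paper.

    # Sketch: dynamic programming over one stereo scanline (Viterbi-style).
    import numpy as np

    def scanline_dp(left_row, right_row, max_disp=16, smooth=5.0):
        n = len(left_row)
        # data_cost[x, d] = |I_left(x) - I_right(x - d)|
        data_cost = np.full((n, max_disp), 1e6)
        for d in range(max_disp):
            xs = np.arange(d, n)
            data_cost[xs, d] = np.abs(left_row[xs] - right_row[xs - d])

        disp = np.arange(max_disp)
        pairwise = smooth * np.abs(disp[:, None] - disp[None, :])  # |d - d_prev|

        cost = data_cost[0].copy()
        back = np.zeros((n, max_disp), dtype=int)
        for x in range(1, n):
            total = cost[None, :] + pairwise        # cost of reaching d from d_prev
            back[x] = np.argmin(total, axis=1)
            cost = data_cost[x] + total[np.arange(max_disp), back[x]]

        best = np.empty(n, dtype=int)               # backtrack the optimal path
        best[-1] = int(np.argmin(cost))
        for x in range(n - 1, 0, -1):
            best[x - 1] = back[x, best[x]]
        return best

    rng = np.random.default_rng(0)
    row = rng.random(64)
    print(scanline_dp(row, np.roll(row, -3)))       # toy pair with a ~3 px shift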

  7. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950

  8. A tutorial introduction to Bayesian inference for stochastic epidemic models using Approximate Bayesian Computation.

    Science.gov (United States)

    Kypraios, Theodore; Neal, Peter; Prangle, Dennis

    2017-05-01

    Likelihood-based inference for disease outbreak data can be very challenging due to the inherent dependence of the data and the fact that they are usually incomplete. In this paper we review recent Approximate Bayesian Computation (ABC) methods for the analysis of such data by fitting to them stochastic epidemic models without having to calculate the likelihood of the observed data. We consider both non-temporal and temporal-data and illustrate the methods with a number of examples featuring different models and datasets. In addition, we present extensions to existing algorithms which are easy to implement and provide an improvement to the existing methodology. Finally, R code to implement the algorithms presented in the paper is available on https://github.com/kypraios/epiABC. Copyright © 2016 Elsevier Inc. All rights reserved.
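    Beyond the authors' R code (linked above), the flavor of the method can be conveyed by a tiny rejection-ABC example for a stochastic SIR epidemic, matching on the final size; the population size, observed final size, priors and tolerance below are all made up for illustration.

    # Sketch: rejection ABC for a stochastic SIR model, matching the final size.
    import numpy as np

    rng = np.random.default_rng(1)
    N, I0, observed_final_size, tol = 100, 1, 35, 3

    def simulate_final_size(beta, gamma):
        """Event-driven SIR simulation; returns the number ever infected."""
        S, I = N - I0, I0
        while I > 0:
            rate_inf = beta * S * I / N
            rate_rec = gamma * I
            if rng.random() < rate_inf / (rate_inf + rate_rec):
                S, I = S - 1, I + 1       # infection event
            else:
                I -= 1                    # removal event
        return N - S

    accepted = []
    while len(accepted) < 500:
        beta = rng.uniform(0, 3)          # prior on infection rate (assumed)
        gamma = rng.uniform(0, 3)         # prior on removal rate (assumed)
        if abs(simulate_final_size(beta, gamma) - observed_final_size) <= tol:
            accepted.append((beta, gamma))

    post = np.array(accepted)
    print("ABC posterior mean: beta %.2f, gamma %.2f" % tuple(post.mean(axis=0)))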

  9. Computer vision techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar

    1990-01-01

    Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top level whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task, and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles presents challenging problems. Research is described which applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.

  10. PENGEMBANGAN COMPUTER VISION SYSTEM SEDERHANA UNTUK MENENTUKAN KUALITAS TOMAT Development of a simple Computer Vision System to determine tomato quality

    Directory of Open Access Journals (Sweden)

    Rudiati Evi Masithoh

    2012-05-01

    Full Text Available The purpose of this research was to develop a simple computer vision system (CVS) to non-destructively measure tomato quality based on its Red Green Blue (RGB) color parameters. The tomato quality parameters measured were Brix, citric acid, vitamin C, and total sugar. The system consisted of a box in which to place the object, a webcam to capture images, a computer to process images, an illumination system, and image analysis software equipped with an artificial neural network technique for determining tomato quality. The network architecture was formed with 3 layers consisting of 1 input layer with 3 input neurons, 1 hidden layer with 14 neurons using the logsig activation function, and an output layer of 5 neurons using the purelin activation function, trained with the backpropagation algorithm. The CVS developed was able to predict the quality parameters Brix value, vitamin C, citric acid, and total sugar. To obtain predicted values equal or close to the actual values, a calibration model was required. For the Brix value, the actual value was obtained from the equation y = 12.16x - 26.46, where x is the predicted Brix. The actual values of vitamin C, citric acid, and total sugar were obtained from y = 1.09x - 3.13, y = 7.35x - 19.44, and y = 1.58x - 0.18, where x is the predicted value of vitamin C, citric acid, and total sugar, respectively. ABSTRACT (translated from Indonesian): The aim of the research was to develop a simple computer vision system (CVS) to determine tomato quality non-destructively based on Red Green Blue (RGB) color parameters. The tomato quality parameters measured were Brix, citric acid, vitamin C, and total sugar. The system consists of the main equipment, namely a box in which to place the object, a webcam to capture the images, a computer to process the data, an illumination system, and image analysis software equipped with an artificial neural network to determine tomato quality. The network architecture was formed with 3 layers consisting of 1 input layer with 3 neurons...
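    A minimal sketch of the mapping described above, a small feed-forward network from mean RGB values to the measured quality parameters, is given below; scikit-learn's MLPRegressor stands in for the backpropagation network (its logistic hidden units roughly correspond to logsig and its linear output to purelin), and the training data are random placeholders.

    # Sketch: RGB -> quality parameters with a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    rgb = rng.random((60, 3))          # mean R, G, B per tomato image (placeholder)
    quality = rng.random((60, 4))      # Brix, citric acid, vitamin C, total sugar

    model = MLPRegressor(hidden_layer_sizes=(14,), activation="logistic",
                         max_iter=5000, random_state=0)
    model.fit(rgb, quality)
    print(model.predict(rgb[:1]))      # predicted quality for one new sample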

  11. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  12. Pharmacometrics and Systems Pharmacology Software Tutorials and Use: Comments and Guidelines for PSP Contributions.

    Science.gov (United States)

    Vicini, P; Friberg, L E; van der Graaf, P H; Rostami-Hodjegan, A

    2013-12-18

    In addition to methodological Tutorials,(1) CPT:PSP has recently started to publish software Tutorials.(2,3) Our readership and authors may be wondering what kind of format or product is expected, and the review of submissions we have already received prompted several discussions within the PSP Editorial Team. This editorial reflects on these discussions and summarizes their salient points. It aims at providing some details about the current vision of CPT:PSP for software tutorial articles. In addition, it brings some clarity on what role commercial software tutorials can have in CPT:PSP and how CPT:PSP tutorials differ from publications that describe the software itself, such as those found in other computer science journals. Finally, the discussion includes reproducibility considerations and the general use of commercial and noncommercial software in CPT:PSP publications. We hope our thoughts, and especially a stated requirement to publish user input to the software to aid in reproducibility, will help in guiding our authors and will stimulate healthy debate among our readers about the evolving nature of our science, how it can be facilitated using software and associated databases as a conduit, and what role this journal can play in fostering both the best modeling and simulation practices and the best scientific approaches to computational modeling, to bring the advantages of modeling and simulation to all regular practitioners, and not just a (self-)selected few.

  13. Furnance grate monitoring by computer vision; Rosteroevervakning med bildanalys

    Energy Technology Data Exchange (ETDEWEB)

    Blom, Elisabet; Gustafsson, Bengt; Olsson, Magnus

    2005-01-01

    During the last couple of years computer vision has developed considerably, alongside computers and video technology. This makes it technically and economically possible to use cameras as a monitoring instrument. The first experiments with this type of equipment were made in the early 1990s. Most of the experiments were made to measure the bed length from the back of the grate. In this experiment the cameras were mounted at the front instead. The highest priority was to detect the topography of the fuel bed. An uneven fuel bed means combustion with local temperature variations, which makes the combustion more difficult to control. The goal was to show possibilities to measure fuel bed height, particle size and combustion intensity, or the spreading of the combustion, with pictures from one or two cameras. The test was done in a bark-fuelled boiler in Karlsborg because that boiler has doors on the fuel feeding side suitable for looking down on the grate. The results show that the camera mounting that was used in Karlsborg was not good enough to do a 3D calculation of the fuel bed. It was, however, possible to see the drying, and it was possible to see the flames in the pictures. To see the flames and steam without over-exposure caused by different light levels at different points, it is possible to use a filter or a camera with non-linear sensitivity. To test whether a parallel mounting of the two cameras would work, a cold test was done in the grate test facility at KMW in Norrtaelje. With the pictures from this test we were able to do 3D measurements of the bed topography. The conclusions are that it is possible to measure bed height and bed topography with other camera positions than we were able to use in this experiment. The particle size is easier to measure before the fuel enters the boiler, for example over a rim where the particles fall down. It is also possible to estimate a temperature zone where the steam comes off.

  14. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from
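
    As an illustration of the coarse-to-fine, multiresolution processing described above (not the thesis' own algorithm), the sketch below builds a Gaussian pyramid for two frames and refines a global shift estimate level by level using OpenCV's phase correlation; the frame names and number of levels are assumptions:

      import cv2
      import numpy as np

      def pyramid(img, levels):
          # Gaussian multiresolution pyramid, returned coarsest level first.
          pyr = [img.astype(np.float32)]
          for _ in range(levels - 1):
              pyr.append(cv2.pyrDown(pyr[-1]))
          return pyr[::-1]

      def coarse_to_fine_shift(a, b, levels=4):
          # Estimate the global translation of b relative to a, refined level by level.
          pa, pb = pyramid(a, levels), pyramid(b, levels)
          shift = np.zeros(2, dtype=np.float64)
          for la, lb in zip(pa, pb):
              shift *= 2.0                                   # carry the estimate to the finer level
              M = np.float32([[1, 0, -shift[0]], [0, 1, -shift[1]]])
              warped = cv2.warpAffine(lb, M, (lb.shape[1], lb.shape[0]))
              (dx, dy), _ = cv2.phaseCorrelate(la, warped)   # residual shift at this level
              shift += np.array([dx, dy])
          return shift                                       # (dx, dy) in pixels at full resolution

      a = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
      b = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
      print(coarse_to_fine_shift(a, b))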

  15. Crossing the divide between computer vision and data bases in search of image data bases

    NARCIS (Netherlands)

    M. Worring; A.W.M. Smeulders

    1998-01-01

    Image databases call upon the combined effort of computer vision and database technology to advance beyond exemplary systems. In this paper we charter several areas for mutually beneficial research activities and provide an architectural design to accommodate them.

  16. Supporting Real-Time Computer Vision Workloads using OpenVX on Multicore+GPU Platforms

    Science.gov (United States)

    2015-05-01

    Snippets from the report: "Supporting Real-Time Computer Vision Workloads using OpenVX on Multicore+GPU Platforms," Glenn A. Elliott, Kecheng Yang, and James H. Anderson, Department of Computer Science, University of North Carolina at Chapel Hill. Abstract (excerpt): In the automotive industry, there is currently great interest in vision-based sensing through cameras ... workloads specified using OpenVX to be supported in a predictable way.

  17. GpuCV : a GPU-accelerated framework for image processing and computer vision

    OpenAIRE

    ALLUSSE, Yannick; Horain, Patrick; Agarwal, Ankit; Saipriyadarshan, Cindula

    2008-01-01

    International audience; This paper briefly describes the state of the art of accelerating image processing with graphics hardware (GPU) and discusses some of its caveats. Then it describes GpuCV, an open source multi-platform library for GPU-accelerated image processing and Computer Vision operators and applications. It is meant for computer vision scientists not familiar with GPU technologies. GpuCV is designed to be compatible with the popular OpenCV library by offering GPU-accelera...

  18. CloudCV: Deep Learning and Computer Vision on the Cloud

    OpenAIRE

    Agrawal, Harsh

    2016-01-01

    We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely chall...

  19. Computer Vision Approach for Low Cost, High Precision Measurement of Grapevine Trunk Diameter in Outdoor Conditions

    OpenAIRE

    Pérez, Diego Sebastián; Bromberg, Facundo; Antivilo, Francisco Gonzalez

    2014-01-01

    Trunk diameter is a variable of agricultural interest, used mainly in the prediction of fruit trees production. It is correlated with leaf area and biomass of trees, and consequently gives a good estimate of the potential production of the plants. This work presents a low cost, high precision method for the measurement of trunk diameter of grapevines based on Computer Vision techniques. Several methods based on Computer Vision and other techniques are introduced in the literature. These metho...

  20. Smartphone, tablet computer and e-reader use by people with vision impairment.

    Science.gov (United States)

    Crossland, Michael D; Silva, Rui S; Macedo, Antonio F

    2014-09-01

    Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked for demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reasons for not using these devices were cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  1. Signal- and Symbol-based Representations in Computer Vision

    DEFF Research Database (Denmark)

    Krüger, Norbert; Felsberg, Michael

    We discuss problems of signal- and symbol-based representations in terms of three dilemmas which are faced in the design of each vision system. Signal- and symbol-based representations are opposite ends of a spectrum of conceivable design decisions caught at opposite sides of the dilemmas. We make...

  2. Development of a wireless computer vision instrument to detect biotic stress in wheat.

    Science.gov (United States)

    Casanova, Joaquin J; O'Shaughnessy, Susan A; Evett, Steven R; Rush, Charles M

    2014-09-23

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.
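
    As a rough illustration of the processing described above (EM segmentation of vegetation versus soil pixels followed by a vegetation-hue statistic), the sketch below uses a two-component Gaussian mixture as the EM step; it is not the authors' code, and the file name, feature choice and vegetation-labelling rule are assumptions:

      import cv2
      import numpy as np
      from sklearn.mixture import GaussianMixture

      img = cv2.imread("wheat_plot.jpg")                          # illustrative file name
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float64)
      features = hsv[:, :2]                                       # hue and saturation (assumed feature choice)

      # EM with two components, fitted on a random subsample for speed, then applied to every pixel.
      rng = np.random.default_rng(0)
      sample = features[rng.choice(len(features), size=min(50000, len(features)), replace=False)]
      gmm = GaussianMixture(n_components=2, random_state=0).fit(sample)
      labels = gmm.predict(features)

      # Assume the component whose mean hue is closest to green (60 on OpenCV's 0-179 scale) is vegetation.
      veg_label = int(np.argmin(np.abs(gmm.means_[:, 0] - 60.0)))
      veg_hue_deg = hsv[labels == veg_label, 0] * 2.0             # convert OpenCV hue to degrees

      print("vegetation cover: %.1f %%" % (100.0 * np.mean(labels == veg_label)))
      print("mean vegetation hue: %.1f degrees" % veg_hue_deg.mean())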

  3. A study of computer-related upper limb discomfort and computer vision syndrome.

    Science.gov (United States)

    Sen, A; Richardson, Stanley

    2007-12-01

    Personal computers are one of the commonest office tools in Malaysia today. Their usage, even for three hours per day, leads to a health risk of developing Occupational Overuse Syndrome (OOS), Computer Vision Syndrome (CVS), low back pain, tension headaches and psychosocial stress. The study was conducted to investigate how a multiethnic society in Malaysia is coping with these problems that are increasing at a phenomenal rate in the west. This study investigated computer usage, awareness of ergonomic modifications of computer furniture and peripherals, symptoms of CVS and risk of developing OOS. A cross-sectional questionnaire study of 136 computer users was conducted on a sample population of university students and office staff. A 'Modified Rapid Upper Limb Assessment (RULA) for office work' technique was used for evaluation of OOS. The prevalence of CVS was surveyed incorporating a 10-point scoring system for each of its various symptoms. It was found that many were using standard keyboard and mouse without any ergonomic modifications. Around 50% of those with some low back pain did not have an adjustable backrest. Many users had higher RULA scores of the wrist and neck suggesting increased risk of developing OOS, which needed further intervention. Many (64%) were using refractive corrections and still had high scores of CVS commonly including eye fatigue, headache and burning sensation. The increase of CVS scores (suggesting more subjective symptoms) correlated with increase in computer usage spells. It was concluded that further onsite studies are needed, to follow up this survey to decrease the risks of developing CVS and OOS amongst young computer users.

  4. Road Recognition for Vision Navigation of Robot by Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Jagadeesh Thati,

    2011-07-01

    Full Text Available This paper presents a method for vision navigation of a robot by road recognition based on image processing. By taking advantage of the unique structure in road images, the square regions on the road can be scanned while the robot is moving. In this paper we focus on the pixel positions of the corners of the two squares in the images. Large-scale experiments on road sequences show that the road detection method is robust across coordinate systems, road types and scenarios. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.

  5. Signal- and Symbol-based Representations in Computer Vision

    DEFF Research Database (Denmark)

    Krüger, Norbert; Felsberg, Michael

    We discuss problems of signal- and symbol-based representations in terms of three dilemmas which are faced in the design of each vision system. Signal- and symbol-based representations are opposite ends of a spectrum of conceivable design decisions caught at opposite sides of the dilemmas. We make inherent problems explicit and describe potential design decisions for artificial visual systems to deal with the dilemmas.

  6. FIN 370new UOP Course Tutorial/TutorialRank

    OpenAIRE

    2015-01-01

    For more course tutorials visit www.tutorialrank.com Tutorial Purchased: 24 Times, Rating: A+   1.Which of the following is true regarding Investment Banks? 2. We compute the profitability index of a capital-budgeting proposal by Initial outlay = $1,748.80 3. Project Sigma requires an investment of $1 million and has a NPV of $10. Project Delta requires an investment of $500,000 and has a NPV of $150,000. The projects involve unrelated new product lines. What ...

  7. On the Recognition-by-Components Approach Applied to Computer Vision

    Science.gov (United States)

    Baessmann, Henning; Besslich, Philipp W.

    1990-03-01

    The human visual system is usually able to recognize objects as well as their spatial relations without the support of depth information like stereo vision. For this reason we can easily understand cartoons, photographs and movies. It is the aim of our current research to exploit this aspect of human perception in the context of computer vision. From a monocular TV image we obtain information about the type of an object observed in the scene and its position relative to the camera (viewpoint). This paper deals with the theory of human image understanding as far as used in this system and describes the realization of a vision system based on these principles.

  8. A Model of an Expert Computer Vision and Recognition Facility with Applications of a Proportion Technique.

    Science.gov (United States)

    2014-09-26

    Snippets from the report: a function called WHATISFACE [Rhodes][Tucker][Hogg][Sowa]; the model offering the most specific information about structure; the reference Hogg, D., "Model-based vision: a program to see a walking person," Image and Vision Computing, Vol. 1, No. 1, February 1983, pp. 5-20; and a 1983 Addison-Wesley reference ("...Systems," Addison-Wesley Publishing Company, Inc., Massachusetts, 1983).

  9. A Computer Vision Method for 3D Reconstruction of Curves-Marked Free-Form Surfaces

    Institute of Scientific and Technical Information of China (English)

    Xiong Hanwei; Zhang Xiangwei

    2001-01-01

    Visual methods are now broadly used in reverse engineering for 3D reconstruction. The traditional computer vision methods are feature-based, i.e., they require that the objects reveal features owing to geometry or textures. For textureless free-form surfaces, dense feature points are added artificially. In this paper, a new method is put forward combining computer vision with CAGD. The surface is subdivided into N-side Gregory patches using marked curves, and a stereo algorithm is used to reconstruct the curves. Then, the cross-boundary tangent vector is computed through reflectance analysis. At last, the whole surface can be reconstructed by joining these patches with G1 continuity.

  10. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro

    2012-05-01

    Full Text Available In this paper we present details about the implementation of computer vision based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, where the computer-patient interaction during playing contributes to the development of different motor skills. The use of computer vision methods allows the automatic guidance of the patient’s movements making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices like usual webcams and Netbooks.

  11. Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System

    Science.gov (United States)

    Xu, Richard Y. D.; Jin, Jesse S.

    2007-01-01

    This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipments. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…

  13. Cyborg systems as platforms for computer-vision algorithm-development for astrobiology

    Science.gov (United States)

    McGuire, Patrick Charles; Rodríguez Manfredi, José Antonio; Martínez, Eduardo Sebastián; Gómez Elvira, Javier; Díaz Martínez, Enrique; Ormö, Jens; Neuffer, Kai; Giaquinta, Antonino; Camps Martínez, Fernando; Lepinette Malvitte, Alain; Pérez Mercader, Juan; Ritter, Helge; Oesker, Markus; Ontrup, Jörg; Walter, Jörg

    2004-03-01

    Employing the allegorical imagery from the film "The Matrix", we motivate and discuss our "Cyborg Astrobiologist" research program. In this research program, we are using a wearable computer and video camcorder in order to test and train a computer-vision system to be a field-geologist and field-astrobiologist.

  14. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    Science.gov (United States)

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-04

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculating method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions.

  15. Computer vision and color measurement techniques for inline monitoring of cheese curd syneresis.

    Science.gov (United States)

    Everard, C D; O'Callaghan, D J; Fagan, C C; O'Donnell, C P; Castillo, M; Payne, F A

    2007-07-01

    Optical characteristics of stirred curd were simultaneously monitored during syneresis in a 10-L cheese vat using computer vision and colorimetric measurements. Curd syneresis kinetic conditions were varied using 2 levels of milk pH (6.0 and 6.5) and 2 agitation speeds (12.1 and 27.2 rpm). Measured optical parameters were compared with gravimetric measurements of syneresis, taken simultaneously. The results showed that computer vision and colorimeter measurements have potential for monitoring syneresis. The 2 different phases, curd and whey, were distinguished by means of color differences. As syneresis progressed, the backscattered light became increasingly yellow in hue for circa 20 min for the higher stirring speed and circa 30 min for the lower stirring speed. Syneresis-related gravimetric measurements of importance to cheese making (e.g., curd moisture content, total solids in whey, and yield of whey) correlated significantly with computer vision and colorimetric measurements.
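
    A minimal sketch of the image-side measurement (tracking a hue angle for the curd and whey mixture frame by frame) is given below; the video name is illustrative, OpenCV's 8-bit CIELAB encoding is assumed, and this is not the instrumentation used in the study:

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("cheese_vat.avi")       # illustrative file name
      hue_track = []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB).astype(np.float32)
          # OpenCV stores 8-bit a* and b* offset by 128.
          a = lab[:, :, 1] - 128.0
          b = lab[:, :, 2] - 128.0
          hue_angle = np.degrees(np.arctan2(b.mean(), a.mean()))   # CIELAB hue angle h_ab
          hue_track.append(hue_angle)
      cap.release()
      # hue_track now holds one hue-angle value per frame; a drift toward 90 degrees
      # corresponds to the backscattered light becoming more yellow as syneresis progresses.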

  16. Analysis of the Indented Cylinder by the use of Computer Vision

    DEFF Research Database (Denmark)

    Buus, Ole Thomsen

    The research summarised in this PhD thesis took advantage of methods from computer vision to experimentally analyse the sorting/separation ability of a specific type of seed sorting device – known as an "indented cylinder" – by the use of computer vision (or image analysis). The indented cylinder basically separates incoming seeds into two sub-groups: (1) "long" seeds and (2) "short" seeds (known as length-separation). The motion of seeds being physically manipulated inside an active indented cylinder was analysed using various computer vision methods. The data from such analyses were used to create an overview of the machine's ability to separate ... Moreover, the imagery data sets, generated as a result of actual recordings of sorting experiments using the indented cylinder, are novel by their high dimensionality and size. Paper II in Appendix B makes one of these data sets available online ...

  17. Implementation of Water Quality Management by Fish School Detection Based on Computer Vision Technology

    Directory of Open Access Journals (Sweden)

    Yan Hou

    2015-08-01

    Full Text Available To address the detection of abnormal water quality, this study proposed a biological water abnormity detection method based on computer vision technology combined with a Support Vector Machine (SVM). First, computer vision is used to acquire the fish school motion feature parameters which can reflect the water quality, and these parameters are then preprocessed. Next, the sample set is established and the water quality abnormity monitoring model based on computer vision technology combined with SVM is obtained. At last, the model is used to analyze and evaluate the motion characteristic parameters of fish schools in unknown water, in order to indirectly monitor the water quality. In view of the great influence of the kernel function and parameter optimization on the model, this study compared different kinds of kernel functions and then performed optimization using Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and grid search. The results obtained demonstrate that the method is effective for monitoring water quality abnormity.
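
    The kernel comparison and parameter search step can be illustrated with a small scikit-learn sketch; the feature matrix here is random placeholder data standing in for the preprocessed fish-school motion features, and only grid search is shown (the study also uses PSO and GA):

      import numpy as np
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # X: per-clip fish-school motion features (e.g. mean speed, dispersion); y: 0 = normal, 1 = abnormal.
      # Both are placeholders here: random data standing in for the preprocessed sample set.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 6))
      y = rng.integers(0, 2, size=200)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # Compare kernels and tune C / gamma by grid search.
      param_grid = {
          "svc__kernel": ["rbf", "linear", "poly"],
          "svc__C": [0.1, 1, 10, 100],
          "svc__gamma": ["scale", 0.01, 0.1, 1.0],
      }
      search = GridSearchCV(make_pipeline(StandardScaler(), SVC()), param_grid, cv=5)
      search.fit(X_tr, y_tr)

      print("best parameters:", search.best_params_)
      print("cross-validated accuracy: %.3f" % search.best_score_)
      print("test accuracy: %.3f" % search.score(X_te, y_te))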

  18. Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision.

    Science.gov (United States)

    Warren, William H

    2012-01-01

    David Marr's book Vision attempted to formulate a thoroughgoing formal theory of perception. Marr borrowed much of the "computational" level from James Gibson: a proper understanding of the goal of vision, the natural constraints, and the available information are prerequisite to describing the processes and mechanisms by which the goal is achieved. Yet, as a research program leading to a computational model of human vision, Marr's program did not succeed. This article asks why, using the perception of 3D shape as a morality tale. Marr presumed that the goal of vision is to recover a general-purpose Euclidean description of the world, which can be deployed for any task or action. On this formulation, vision is underdetermined by information, which in turn necessitates auxiliary assumptions to solve the problem. But Marr's assumptions did not actually reflect natural constraints, and consequently the solutions were not robust. We now know that humans do not in fact recover Euclidean structure--rather, they reliably perceive qualitative shape (hills, dales, courses, ridges), which is specified by the second-order differential structure of images. By recasting the goals of vision in terms of our perceptual competencies, and doing the hard work of analyzing the information available under ecological constraints, we can reformulate the problem so that perception is determined by information and prior knowledge is unnecessary.

  19. An innovative road marking quality assessment mechanism using computer vision

    Directory of Open Access Journals (Sweden)

    Kuo-Liang Lin

    2016-06-01

    Full Text Available Aesthetic quality acceptance of road marking works has relied on subjective visual examination. Due to a lack of quantitative operation procedures, the acceptance outcome can be biased, which results in great quality variation. To improve the aesthetic quality acceptance procedure for road marking, we develop an innovative road marking quality assessment mechanism utilizing machine vision technologies. Using edge smoothness as a quantitative aesthetic indicator, the proposed prototype system first receives digital images of the finished road marking surface and has the images processed and analyzed to capture the geometric characteristics of the marking. The geometric characteristics are then evaluated to determine the quality level of the finished work. The system is demonstrated through two real cases to show how it works. In the end, a test comparing the assessment results between the proposed system and expert inspection is conducted to enhance the accountability of the proposed mechanism.
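
    One way to turn edge smoothness into a number (a rough sketch, not necessarily the authors' indicator) is to fit a straight reference line to the detected marking boundary and report the RMS deviation of the boundary points from that line; the file name, threshold choice and use of the whole contour are simplifying assumptions:

      import cv2
      import numpy as np

      img = cv2.imread("marking.jpg", cv2.IMREAD_GRAYSCALE)        # illustrative file name

      # Isolate the bright road-marking paint and take its largest connected outline.
      _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
      edge = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)

      # Fit a straight reference line and measure how far the boundary points stray from it:
      # a large RMS deviation indicates a ragged (less smooth) marking edge.  A real system
      # would exclude the marking's end caps and correct for perspective first.
      vx, vy, x0, y0 = cv2.fitLine(edge.astype(np.float32), cv2.DIST_L2, 0, 0.01, 0.01).ravel()
      normal = np.array([-vy, vx], dtype=np.float64)
      deviation = (edge - np.array([x0, y0], dtype=np.float64)) @ normal
      rms_roughness = np.sqrt(np.mean(deviation ** 2))
      print("edge roughness (pixels RMS): %.2f" % rms_roughness)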

  20. Image-plane processing for improved computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It permits control of the spatial imaging system response to trade off edge enhancement for sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the location of edges is determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of square sensor array lattice to decrease sensitivity to edge orientation improves edge information by about 10%.
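
    A crude digital analogue of the two-dimensional bandpass filtering described above is a difference of Gaussians; the sketch below is only an illustration (the paper's filter is realized optically during image formation), and the file name and band limits are assumptions:

      import cv2
      import numpy as np

      img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

      # Difference of Gaussians as a simple 2D bandpass filter: subtracting a wide blur from a
      # narrow one suppresses both fine noise and slowly varying illumination, leaving the
      # intensity transitions from which edge locations can be estimated.
      sigma_narrow, sigma_wide = 1.0, 2.0        # assumed band
      bandpass = cv2.GaussianBlur(img, (0, 0), sigma_narrow) - cv2.GaussianBlur(img, (0, 0), sigma_wide)

      # Mark strong bandpass responses as candidate edge pixels; a fuller implementation would
      # locate the zero crossings of the response for sub-pixel edge positions.
      strong = np.abs(bandpass) > 3.0 * bandpass.std()
      print("candidate edge pixels: %d" % int(strong.sum()))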

  2. A Novel Solar Tracker Based on Omnidirectional Computer Vision

    Directory of Open Access Journals (Sweden)

    Zakaria El Kadmiri

    2015-01-01

    Full Text Available This paper presents a novel solar tracker system based on omnidirectional vision technology. The analysis of acquired images with a catadioptric camera allows extracting accurate information about the sun position toward both elevation and azimuth. The main advantages of this system are its wide field of tracking of 360° horizontally and 200° vertically. The system has the ability to track the sun in real time independently of the spatiotemporal coordinates of the site. The extracted information is used to control the two DC motors of the dual-axis mechanism to achieve the optimal orientation of the photovoltaic panels with the aim of increasing the power generation. Several experimental studies have been conducted and the obtained results confirm the power generation efficiency of the proposed solar tracker.
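
    A minimal sketch of extracting a sun direction from a single omnidirectional frame is shown below; the brightest-blob detection, the radially symmetric mirror model and the linear radius-to-elevation mapping are simplifying assumptions, not the authors' calibrated model:

      import cv2
      import numpy as np

      frame = cv2.imread("omni_frame.png", cv2.IMREAD_GRAYSCALE)   # illustrative file name
      h, w = frame.shape
      cx, cy = w / 2.0, h / 2.0                    # assume the mirror is centred in the image
      r_max = min(cx, cy)                          # image radius assumed to correspond to the horizon

      # The sun appears as the brightest blob: threshold near the maximum and take its centroid.
      _, bright = cv2.threshold(frame, int(frame.max()) - 10, 255, cv2.THRESH_BINARY)
      m = cv2.moments(bright, binaryImage=True)
      sx, sy = m["m10"] / m["m00"], m["m01"] / m["m00"]

      # In a simple radially symmetric mirror model, the angle around the centre gives azimuth and
      # the radial distance maps (here linearly, as a crude approximation) to elevation.
      azimuth = (np.degrees(np.arctan2(sy - cy, sx - cx)) + 360.0) % 360.0
      radius = np.hypot(sx - cx, sy - cy)
      elevation = 90.0 * (1.0 - radius / r_max)
      print("sun azimuth %.1f deg, elevation %.1f deg" % (azimuth, elevation))
      # These two angles would drive the azimuth and elevation DC motors of a dual-axis tracker.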

  3. Big data computing: Building a vision for ARS information management

    Science.gov (United States)

    Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...

  4. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    Science.gov (United States)

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence to support that some human skeletal remains belong or not to a missing person. It involves the process of overlaying a skull with a number of ante mortem images of an individual and the analysis of their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage just focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error prone, and time consuming part of the whole process. Though the numerical assessment of the method quality has not been achieved yet, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can be thus considered as a tool to aid forensic anthropologists to develop the skull-face overlay, automating and avoiding subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    Directory of Open Access Journals (Sweden)

    Joaquin J. Casanova

    2014-09-01

    Full Text Available Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.

  6. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools which can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real time for color measurement on flat surface food. For this purpose, a device (software and hardware) capable of performing this task was designed and implemented, which consisted of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (Model CIE L*a*b*), where the errors of the color parameters were estimated: eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensure adequate and efficient application for automation of industrial processes in quality control in the food industry sector.
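
    The error figures above compare image-derived CIELAB values with a colorimeter reading. A toy version of that comparison is sketched below; the paper's calibration uses its own device-specific transformation, whereas this sketch simply uses OpenCV's standard conversion together with a hypothetical file name and hypothetical reference values:

      import cv2
      import numpy as np

      img = cv2.imread("sample_surface.png")                  # illustrative file name
      lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

      # Convert OpenCV's 8-bit encoding to CIELAB units: L* in [0, 100], a*/b* centred on 0.
      L = lab[:, :, 0].mean() * 100.0 / 255.0
      a = lab[:, :, 1].mean() - 128.0
      b = lab[:, :, 2].mean() - 128.0

      ref_L, ref_a, ref_b = 65.2, 18.4, 30.1                  # hypothetical colorimeter reference
      for name, measured, ref in (("L*", L, ref_L), ("a*", a, ref_a), ("b*", b, ref_b)):
          print("%s error: %.3f %%" % (name, 100.0 * abs(measured - ref) / abs(ref)))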

  7. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-01-01

    Full Text Available Artificial vision systems also known as computer vision are potent quality inspection tools, which can be applied in pattern recognition for fruits and vegetables analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real-time for the color measurement on flat surface food. For this purpose was designed and implemented a device capable of performing this task (software and hardware), which consisted of two phases: (a) image acquisition and (b) image processing and analysis. Both the algorithm and the graphical interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (Model CIE L*a*b*), where were estimated the errors of the color parameters: eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensure adequate and efficient automation application in industrial processes in the quality control in the food industry sector.

  8. Computer Vision Syndrome and Associated Factors Among Medical ...

    African Journals Online (AJOL)

    physical health of Indian users especially among college students. Hence, this study was ..... temporary discomfort reduces the efficiency of work and thereby productivity. Health .... computer use, physical activity, stress, and depression among.

  9. Computer and visual display terminals (VDT) vision syndrome (CVDTS)

    National Research Council Canada - National Science Library

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-01-01

    .... However the prolonged use of these devices is not without any complication. Computer and visual display terminals syndrome is a constellation of symptoms ocular as well as extraocular associated with prolonged use of visual display terminals...

  10. Computer and visual display terminals (VDT) vision syndrome (CVDTS)

    OpenAIRE

    Parihar, J.K.S.; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K.S.

    2016-01-01

    Computer and visual display terminals have become an essential part of modern lifestyle. The use of these devices has made our life simple in household work as well as in offices. However the prolonged use of these devices is not without any complication. Computer and visual display terminals syndrome is a constellation of symptoms ocular as well as extraocular associated with prolonged use of visual display terminals. This syndrome is gaining importance in this modern era because of the wide...

  11. Computer vision syndrome: a review of ocular causes and potential treatments.

    Science.gov (United States)

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also occupational productivity since between 64% and 90% of computer users experience visual symptoms which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes for this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation.

  12. Former Food Products Safety Evaluation: Computer Vision as an Innovative Approach for the Packaging Remnants Detection

    Directory of Open Access Journals (Sweden)

    Marco Tretola

    2017-01-01

    Full Text Available Former food products (FFPs) represent a way by which leftovers from the food industry (e.g., biscuits, bread, breakfast cereals, chocolate bars, pasta, savoury snacks, and sweets) are converted into ingredients for the feed industry, thereby keeping food losses in the food chain. FFPs represent an alternative source of nutrients for animal feeding. However, beyond their nutritional value, the use of FFPs in animal feeding implies also safety issues, such as those related to the presence of packaging remnants. These contaminants might reside in FFP during food processing (e.g., collection, unpacking, mixing, grinding, and drying). Nowadays, artificial senses are widely used for the detection of foreign material in food and all of them involve computer vision. Computer vision technique provides detailed pixel-based characterizations of colours spectrum of food products, suitable for quality evaluation. The application of computer vision for a rapid qualitative screening of FFP's safety features, in particular for the detection of packaging remnants, has been recently tested. This paper presents the basic principles, the advantages, and disadvantages of the computer vision method with an evaluation of its potential in the detection of packaging remnants in FFP.

  13. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  14. Target-less computer vision for traffic signal structure vibration studies

    Science.gov (United States)

    Bartilson, Daniel T.; Wieghaus, Kyle T.; Hurlebaus, Stefan

    2015-08-01

    The presented computer vision method allows for non-contact, target-less determination of traffic signal structure displacement and modal parameters, including mode shapes. By using an analytical model to relate structural displacement to stress, it is shown possible to utilize a rapid set-up and take-down computer vision-based system to infer structural stresses to a high degree of precision. Using this computer vision method, natural frequencies of the structure are determined with accuracy similar to strain gage and string potentiometer instrumentation. Even with structural displacements measured at less than 0.5 pixel, excellent mode shape results are obtained. Finally, one-minute equivalent stress ranges from ambient wind excitation are found to have excellent agreement between the inferred stress from strain gage data and stresses calculated from computer vision tied to an analytical stress model. This demonstrates the ability of this method and implemented system to develop fatigue life estimates using wind velocity data and modest technical means.
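
    Once the vision system has produced a displacement time series for a point on the structure, the natural frequencies mentioned above can be estimated from the amplitude spectrum. A minimal sketch is shown below, with a synthetic signal standing in for measured displacements and an assumed camera frame rate:

      import numpy as np

      # Displacement time series (pixels or millimetres) for one structural point; here replaced
      # by a synthetic 1.1 Hz oscillation plus noise.
      fs = 30.0                                        # camera frame rate in Hz (assumed)
      t = np.arange(0, 60.0, 1.0 / fs)
      displacement = 0.4 * np.sin(2 * np.pi * 1.1 * t) + 0.05 * np.random.randn(t.size)

      # Dominant vibration frequency from the one-sided amplitude spectrum (skip the DC bin).
      spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
      freqs = np.fft.rfftfreq(displacement.size, d=1.0 / fs)
      peak = freqs[1:][np.argmax(spectrum[1:])]
      print("estimated natural frequency: %.2f Hz" % peak)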

  15. Computer vision and augmented reality in gastrointestinal endoscopy

    Science.gov (United States)

    Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M.

    2015-01-01

    Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy—which relies on the integration of high-definition video data with pathologic correlates—requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. PMID:26133175

  16. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    ... a low-cost embedded computer with very limited computational resources as compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images. After the segmentation stage, the methods are primarily based on statistical analysis and inference. The regression statistics (i.e. R2) of the comparisons of system predictions and manual counts are 0.987 for counting honeybees, and 0.953 and 0.888 for measuring in-activity and out-activity, respectively. The experimental results demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. Besides, the results on computation time show that the Raspberry Pi is a viable solution for such a real-time video processing system.
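
    A minimal sketch of the background-subtraction and counting stage is given below; the video name, subtractor settings and blob-area limits are placeholders rather than the authors' configuration:

      import cv2

      # Count bee-sized moving blobs at the hive entrance by background subtraction.
      cap = cv2.VideoCapture("hive_entrance.avi")
      subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=False)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          bees = [c for c in contours if 50 < cv2.contourArea(c) < 800]   # bee-sized blobs only
          # len(bees) is the per-frame count; tracking the blob centroids across frames would
          # allow in-activity and out-activity to be classified.
      cap.release()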

  17. Computer vision and augmented reality in gastrointestinal endoscopy.

    Science.gov (United States)

    Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M

    2015-08-01

    Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy-which relies on the integration of high-definition video data with pathologic correlates-requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. © The Author(s) 2015. Published by Oxford University Press and the Digestive Science Publishing Co. Limited.

  18. Computer Vision Syndrome in Eleven to Eighteen-Year-Old Students in Qazvin

    Directory of Open Access Journals (Sweden)

    Khalaj

    2015-08-01

    Full Text Available Background Prolonged use of computers can lead to complications such as eye strain, eye and head aches, double and blurred vision, tired eyes, irritation, burning and itching eyes, eye redness, light sensitivity, dry eyes, muscle strains, and other problems. Objectives The aim of the present study was to evaluate visual problems and major symptoms, and their associations among computer users, aged between 11 and 18 years old, residing in the Qazvin city of Iran, during year 2010. Patients and Methods This cross-sectional study was done on 642 secondary to pre university students who had referred to the eye clinic of Buali hospital of Qazvin during year 2013. A questionnaire consisting of demographic information and 26 questions on visual effects of the computer was used to gather information. Participants answered all questions and then underwent complete eye examinations and in some cases cycloplegic refraction. Visual acuity (VA was measured with a logMAR in six meters. Refraction errors were determined using an auto refractometer (Potece and Heine retinoscope. The collected data was then analyzed using the SPSS statistical software. Results The results of this study indicated that 63.86% of the subjects had refractive errors. Refractive errors were significantly different in children of different genders (P < 0.05. The most common complaints associated with the continuous use of computers were eyestrain, eye pain, eye redness, headache, and blurred vision. The most prevalent (81.8% eye-related problem in computer users was eyestrain and the least prevalent was dry eyes (7.84%. In order to reduce computer related problems 54.2% of the participants suggested taking enough rest, 37.9% recommended use of computers only for necessary tasks, while 24.4% and 19.1% suggested the use of monitor shields and proper working distance, respectively. Conclusions Our findings revealed that using computers for prolonged periods of time can lead to eye

  19. Laser Vision-Based Plant Geometries Computation in Greenhouses

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2014-04-01

    Full Text Available Plant growth status parameters are important in greenhouse environment control systems. Measuring plant geometries manually in greenhouses is time-consuming and less accurate. To measure the growth parameters of plants portably and automatically, a laser vision-based measurement system was developed in this paper, consisting of a camera and a laser sheet that scanned the plant vertically. All equipment was mounted on a metal shelf of size 30 cm x 40 cm x 100 cm. The 3D point cloud was obtained with the laser sheet scanning the plant vertically, while the camera recorded the laser lines projected on the plant. The calibration was conducted with two solid boards standing together at an angle of 90 degrees. The camera's internal and external parameters were calibrated with the Image toolbox in MatLab®. It is useful to take a reference image without laser light and to use difference images to obtain the laser line. Laser line centers were extracted by an improved centroid method. Thus, we obtained the 3D point cloud structure of the sample plant. For leaf length measurement, an iteration method for point clouds was used to extract the axis of the leaf point cloud set. A start point was selected at the end of the leaf point cloud set as the first point of the leaf axis. The points within a certain distance around the start point were chosen as the subset. The centroid of the subset of points was calculated and taken as the next axis point. Iteration was continued until all points in the leaf point cloud set were selected. Leaf length was calculated by curve fitting on these axis points. In order to increase the accuracy of curve fitting, bi-directional start point selection was useful. For leaf area estimation, an exponential regression model was used to describe the grown leaves for the sampled plant (water spinach in this paper). To evaluate the method, a sample of 18 water spinaches planted in the greenhouse (length 16
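
    A minimal sketch of the difference-image and centroid steps for laser-line extraction is shown below; the file names and intensity threshold are assumptions, and the subsequent triangulation to 3D points is only indicated in a comment:

      import cv2
      import numpy as np

      # Difference between an image with the laser sheet on and a reference image without it.
      laser = cv2.imread("laser_on.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
      reference = cv2.imread("laser_off.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
      diff = np.clip(laser - reference, 0, None)

      rows = np.arange(diff.shape[0], dtype=np.float32)[:, None]
      weights = np.where(diff > 20.0, diff, 0.0)            # keep only clearly lit pixels
      col_sum = weights.sum(axis=0)

      # Intensity-weighted centroid per column gives sub-pixel laser-line centres; columns the
      # laser does not reach are marked invalid.  Triangulation with the calibrated camera and
      # laser plane would then turn (column, centre_row) pairs into 3D points on the plant.
      centres = np.full(diff.shape[1], np.nan, dtype=np.float32)
      valid = col_sum > 0
      centres[valid] = (weights * rows).sum(axis=0)[valid] / col_sum[valid]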

  20. Computer vision syndrome in presbyopia and beginning presbyopia: effects of spectacle lens type.

    Science.gov (United States)

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    This office field study investigated the effects of different types of spectacle lenses habitually worn by computer users with presbyopia and in the beginning stages of presbyopia. Computer vision syndrome was assessed through reported complaints and ergonomic conditions. A questionnaire regarding the type of habitually worn near-vision lenses at the workplace, visual conditions and the levels of different types of complaints was administered to 175 participants aged 35 years and older (mean ± SD: 52.0 ± 6.7 years). Statistical factor analysis identified five specific aspects of the complaints. Workplace conditions were analysed based on photographs taken in typical working conditions. In the subgroup of 25 users between the ages of 36 and 57 years (mean 44 ± 5 years), who wore distance-vision lenses and performed more demanding occupational tasks, the reported extents of 'ocular strain', 'musculoskeletal strain' and 'headache' increased with the daily duration of computer work and explained up to 44 per cent of the variance (rs = 0.66). In the other subgroups, this effect was smaller, while in the complete sample (n = 175), this correlation was approximately rs = 0.2. The subgroup of 85 general-purpose progressive lens users (mean age 54 years) adopted head inclinations that were approximately seven degrees more elevated than those of the subgroups with single vision lenses. The present questionnaire was able to assess the complaints of computer users depending on the type of spectacle lenses worn. A missing near-vision addition among participants in the early stages of presbyopia was identified as a risk factor for complaints among those with longer daily durations of demanding computer work. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  1. Computation of Internal Fluid Flows in Channels Using the CFD Software Tool FlowVision

    CERN Document Server

    Kochevsky, A N

    2004-01-01

    The article describes the CFD software tool FlowVision (OOO "Tesis", Moscow). The model equations used for this research are the set of Reynolds and continuity equations and equations of the standard k-ε turbulence model. The aim of the paper was testing of FlowVision by comparing the computational results for a number of simple internal channel fluid flows with known experimental data. The test cases are non-swirling and swirling flows in pipes and diffusers, flows in stationary and rotating bends. Satisfactory correspondence of results was obtained both for flow patterns and respective quantitative values.

  2. Indoor scene classification of robot vision based on cloud computing

    Science.gov (United States)

    Hu, Tao; Qi, Yuxiao; Li, Shipeng

    2016-07-01

    For intelligent service robots, indoor scene classification is an important issue. To overcome the weak real-time performance of conventional algorithms, a new method based on Cloud computing is proposed for global image features in indoor scene classification. With the MapReduce method, the global PHOG feature of each indoor scene image is extracted in parallel, and the feature vectors are used to train the decision classifier through SVM concurrently. Then, the indoor scene is validly classified by the decision classifier. To verify the algorithm performance, we carried out an experiment with 350 typical indoor scene images from the MIT LabelMe image library. Experimental results show that the proposed algorithm attains better real-time performance: generally, it is 1.4-2.1 times faster than traditional classification methods that rely on single-machine computation, while keeping a stable classification accuracy of about 70%.

  3. Computational Biology and the Limits of Shared Vision

    DEFF Research Database (Denmark)

    Carusi, Annamaria

    2011-01-01

    ... of cases is necessary in order to gain a better perspective on social sharing of practices, and on what other factors this sharing is dependent upon. The article presents the case of currently emerging inter-disciplinary visual practices in the domain of computational biology, where the sharing of visual practices would be beneficial to the collaborations necessary for the research. Computational biology includes sub-domains where visual practices are coming to be shared across disciplines, and those where this is not occurring, and where the practices of others are resisted. A significant point ... its domain of study. Social practices alone are not sufficient to account for the shaping of evidence. The philosophy of Merleau-Ponty is introduced as providing an alternative framework for thinking of the complex inter-relations between all of these factors. This philosophy enables us ...

  4. Computer Vision Research and Its Applications to Automated Cartography.

    Science.gov (United States)

    1983-07-27

    Snippets from the report: references "3. Ikeuchi, K. and Horn, B.K.P., Numerical shape from shading and occluding boundaries, Artificial Intelligence 17 (1981) 141-184" and "4. Horn, B.K.P. ..."; and front matter listing Fischler, Program Director and Principal Investigator, (415)859-5106, Artificial Intelligence Center, Computer Science and Technology Division, prepared for the Defense Advanced Research Projects Agency, 1400 Wilson Boulevard, Arlington, Virginia 22209, Attention: Cdr. Ronald Ohlander, Program Manager, Information ...

  5. Effect of Colored Overlays on Computer Vision Syndrome (CVS

    Directory of Open Access Journals (Sweden)

    Mark Rosenfield, MCOptom, PhD

    2015-06-01

    Full Text Available Background: Colored overlays may produce an improvement in reading when superimposed over printed materials. This study determined whether improvements in reading occur when the overlays are placed over a computer monitor. Methods: Subjects (N=30) read from a computer screen for 10 minutes with either a Cerium or control overlay positioned on the monitor. In a third condition, no overlay was present. Immediately following each trial, subjects reported ocular and visual symptoms experienced during the task. Results: Mean symptom scores following the Cerium, control, and no overlay conditions were 12.83, 17.37, and 15.65, respectively (p=0.47). However, a subgroup of 7 subjects (23%) reported significant improvements with the Cerium overlay. The mean symptom scores for the Cerium, control, and no overlay trials for this subgroup were 12.14, 29.86, and 28.93, respectively (p=0.03). No significant improvements in either reading speed or reading errors were observed in this subgroup. Conclusion: The use of colored overlays may provide a treatment method for some subjects reporting symptoms during computer use.

  6. TO STUDY THE ROLE OF ERGONOMICS IN THE MANAGEMENT OF COMPUTER VISION SYNDROME

    Directory of Open Access Journals (Sweden)

    Anshu

    2016-03-01

    Full Text Available INTRODUCTION Ergonomics is the science of designing the job equipment and workplace to fit the worker by obtaining a correct match between the human body, work-related tasks and work tools. By applying the science of ergonomics we can reduce the difficulties faced by computer users. OBJECTIVES To evaluate the efficacy of tear substitutes and the role of ergonomics in the management of Computer Vision Syndrome; to develop a counselling plan and an initial treatment plan, prevent complications, educate the subjects about the disease process and enhance public awareness. MATERIALS AND METHODS A minimum of 100 subjects were selected randomly irrespective of gender, place and nature of computer work and ethnic differences. The subjects were between the age group of 10-60 years and had been using the computer for a minimum of 2 hours/day for at least 5-6 days a week. The subjects underwent tests such as Schirmer's test, tear film break-up time (TBUT), inter-blink interval and ocular surface staining. A Computer Vision score was calculated based on 5 symptoms, each of which was given a score of 2. The symptoms included foreign body sensation, redness, eyestrain, blurring of vision and frequent change in refraction. A score of more than 6 was treated as Computer Vision Syndrome, and these subjects underwent synoptophore tests and refraction. RESULT In the present study, the 100 subjects were divided into 2 groups of 50 each; one group was given tear substitutes only, while in the other group ergonomics was considered together with tear substitutes. There was more improvement after 4 weeks and 8 weeks in the group taking lubricants and ergonomics into consideration than with lubricants alone. More improvement was seen in eyestrain and blurring (P0.05. CONCLUSION Advanced training in proper computer usage can decrease discomfort.

  7. CJS 211 Course Tutorial/TutorialRank

    OpenAIRE

    candice

    2015-01-01

    For more course tutorials visit www.tutorialrank.com Tutorial Purchased: 8 Times, Rating: A+   CJS 211 Week 1 Individual Assignment Ethical Dilemma Paper CJS 211 Week 1 DQ 1 CJS 211 Week 1 DQ 2 CJS 211 Week 2 Individual Assignment Ethical Dilemma Worksheet Law Enforcement CJS 211 Week 2 DQ 1 CJS 211 Week 2 DQ 2 CJS 211 Week 2 Team Assignment Ethical Decision Making Paper CJS 211 Week 3 Individual Assignment Ethical Dilemma Worksheet Prosecutors...

  8. An investigation comparing traditional recitation instruction to computer tutorials which combine three-dimensional animation with varying levels of visual complexity, including digital video in teaching various chemistry topics

    Science.gov (United States)

    Graves, A. Palmer

    This study examines the effect of increasing the visual complexity used in computer-assisted instruction in general chemistry. Traditional recitation instruction was used as a control for the experiment. One tutorial presented a chemistry topic using 3-D animation showing molecular activity and a symbolic representation of the macroscopic view of a chemical phenomenon. A second tutorial presented the same topic but simultaneously presented students with a digital video movie showing the phenomenon and 3-D animation showing the molecular view of the phenomenon. This experimental set-up was used in two different experiments during the first semester of a college-level general chemistry course. The topics covered were the molecular effect of heating water through the solid-liquid phase change and the kinetic molecular theory used in explaining pressure changes. The subjects used in the experiment were 236 college students enrolled in a freshman chemistry course at a large university. The data indicated that the simultaneous presentation of digital video, showing the solid to liquid phase change of water, with a molecular animation, showing the molecular behavior during the phase change, had a significant effect on student particulate understanding when compared to traditional recitation. Although the effect of the KMT tutorial was not statistically significant, there was a positive effect on student particulate understanding. The use of the computer tutorials also had a significant effect on student attitude toward their comprehension of the lesson.

  9. Computer vision for real-time orbital operations. Center directors discretionary fund

    Science.gov (United States)

    Vinz, F. L.; Brewster, L. L.; Thomas, L. D.

    1984-01-01

    Machine vision research is examined as it relates to the NASA Space Station program and its associated Orbital Maneuvering Vehicle (OMV). Initial operations of the OMV for orbital assembly, docking, and servicing are manually controlled from the ground by means of an on-board TV camera. These orbital operations may be accomplished autonomously by machine vision techniques which use the TV camera as a sensing device. Classical machine vision techniques are described. An alternate method is developed and described which employs a syntactic pattern recognition scheme. It has the potential for substantial reduction of computing and data storage requirements in comparison to Two-Dimensional Fast Fourier Transform (2D FFT) image analysis. The method embodies powerful heuristic pattern recognition capability by identifying image shapes such as elongation, symmetry, number of appendages, and the relative length of appendages.

  10. Factors leading to the computer vision syndrome: an issue at the contemporary workplace.

    Science.gov (United States)

    Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J

    2007-01-01

    Vision and eye related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done in order to identify the risk factors leading to CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, and ergonomic factors, and the physiologic response of computer users. The distance from the eye to the computer's monitor (A), the computer's monitor height (B), and the visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle=tan-1(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degree to 13.9 degrees were included in Group 1; and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was made. Computer users in both groups used more tear supplements (as part of the syndrome) than expected, and this association was statistically significant. A key factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.
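
    The grouping used in this study reduces to a single trigonometric relation, angle = tan-1(D/A). A minimal sketch of that computation in Python (variable names, units and the use of an absolute value are our own assumptions):

        import math

        def gaze_angle_deg(eye_to_monitor_a, monitor_height_b, visual_axis_height_c):
            # D is the difference between monitor height (B) and visual axis height (C)
            d = monitor_height_b - visual_axis_height_c
            # Angle of gaze to the monitor, as defined in the study: angle = tan^-1(D/A).
            # abs() is a simplification so the angle is reported as a magnitude.
            return math.degrees(math.atan(abs(d) / eye_to_monitor_a))

        def study_group(angle_deg):
            # Group 1: gaze angles from 0 to 13.9 degrees; Group 2: 14 degrees or more
            return 1 if angle_deg < 14.0 else 2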

  11. Computer vision-based classification of hand grip variations in neurorehabilitation.

    Science.gov (United States)

    Zariffa, José; Steeves, John D

    2011-01-01

    The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE

  12. A conceptual framework of computations in mid-level vision

    Directory of Open Access Journals (Sweden)

    Jonas eKubilius

    2014-12-01

    Full Text Available If a picture is worth a thousand words, as an English idiom goes, what should those words – or, rather, descriptors – capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model’s behavior and its limitations.

  13. A conceptual framework of computations in mid-level vision

    Science.gov (United States)

    Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P.

    2014-01-01

    If a picture is worth a thousand words, as an English idiom goes, what should those words—or, rather, descriptors—capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model's behavior and its limitations. PMID:25566044

  14. Cone beam computed tomography: A new vision in dentistry

    Directory of Open Access Journals (Sweden)

    Manas Gupta

    2015-01-01

    Full Text Available Cone beam computed tomography (CBCT) is a developing imaging technique designed to provide relatively low-dose high-spatial-resolution visualization of high-contrast structures in the head and neck and other anatomic areas. It is a vital content of a dental patient's record. A literature review demonstrated that CBCT has been utilized for oral diagnosis, oral and maxillofacial surgery, endodontics, implantology, orthodontics, temporomandibular joint dysfunction, periodontics, and restorative and forensic dentistry. Recently, higher emphasis has been placed on the CBCT expertise, the three-dimensional (3D) images, and virtual models. This literature review showed that the different indications for CBCT are governed by the needs of the specific dental discipline and the type of procedure performed.

  15. Computer Vision Syndrome and Associated Factors Among Medical and Engineering Students in Chennai

    Science.gov (United States)

    Logaraj, M; Madhupriya, V; Hegde, SK

    2014-01-01

    Background: Almost all institutions, colleges, universities and homes today use computers regularly. Very little research has been carried out on Indian users, especially college students, regarding the effects of computer use on eye and vision related problems. Aim: The aim of this study was to assess the prevalence of computer vision syndrome (CVS) among medical and engineering students and the factors associated with the same. Subjects and Methods: A cross-sectional study was conducted among medical and engineering college students of a University situated in the suburban area of Chennai. Students who used a computer in the month preceding the date of study were included in the study. The participants were surveyed using a pre-tested structured questionnaire. Results: Among engineering students, the prevalence of CVS was found to be 81.9% (176/215), while among medical students it was found to be 78.6% (158/201). A significantly higher proportion of engineering students, 40.9% (88/215), used computers for 4-6 h/day as compared to medical students, 10% (20/201). Students who used the computer for 4-6 h were at significantly higher risk of developing redness (OR = 1.2, 95% CI = 1.0-3.1, P = 0.04) and burning sensation (OR = 2.1, 95% CI = 1.3-3.1) compared to those who used the computer for less than 4 h. A significant correlation was found between increased hours of computer use and the symptoms of redness, burning sensation, blurred vision and dry eyes. Conclusion: The present study revealed that more than three-fourths of the students complained of at least one of the symptoms of CVS while working on the computer. PMID:24761234

  16. Fuzzy Control Tutorial

    DEFF Research Database (Denmark)

    Dotoli, M.; Jantzen, Jan

    1999-01-01

    The tutorial concerns automatic control of an inverted pendulum, especially rule based control by means of fuzzy logic. A ball balancer, implemented in a software simulator in Matlab, is used as a practical case study. The objectives of the tutorial are to teach the basics of fuzzy control, and to show how to apply fuzzy logic in automatic control. The tutorial is distance learning, where students interact one-to-one with the teacher using e-mail.

  17. Fuzzy Control Tutorial

    DEFF Research Database (Denmark)

    Dotoli, M.; Jantzen, Jan

    1999-01-01

    The tutorial concerns automatic control of an inverted pendulum, especially rule based control by means of fuzzy logic. A ball balancer, implemented in a software simulator in Matlab, is used as a practical case study. The objectives of the tutorial are to teach the basics of fuzzy control, and to show how to apply fuzzy logic in automatic control. The tutorial is distance learning, where students interact one-to-one with the teacher using e-mail.

  18. A Tutorial on Variational Integrators

    CERN Document Server

    Webb, Stephen D

    2014-01-01

    We present a brief tutorial on the nuts and bolts computation of a multisymplectic particle-in-cell algorithm using the discretized Lagrangian approach. This approach, originated by Marsden, Shadwick, and others, brings the benefits of symplectic integration of Hamiltonian systems to full electromagnetic particle-in-cell algorithms. To make the work more approachable, we present a basic discussion of the philosophy, combined with a detailed derivation of a standard 1-dimensional electrostatic particle-in-cell algorithm.
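
    As a reminder of the core idea behind such tutorials (our own summary, not a quotation from the paper): the continuous action integral is replaced by a sum over a discrete Lagrangian, and the update rule follows from the standard discrete Euler-Lagrange equations,

        S_d = \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}),
        \qquad
        D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0,

    where D_1 and D_2 denote derivatives with respect to the first and second argument; solving the second relation for q_{k+1} gives the symplectic time step.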

  19. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    Science.gov (United States)

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of Computer vision syndrome. Information was collected from Medline, Embase and the National Library of Medicine over the last 30 years up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with Computer vision syndrome present to a variety of different specialists, including General Practitioners, Neurologists, Stroke physicians and Ophthalmologists. While the condition is common, awareness is poor among the public and among health professionals. Recognising this condition in the clinic or in emergency situations like the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of Computer vision syndrome and education of health professionals is vital. Preventive strategies should form part of workplace ergonomics routinely. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  20. Application of computer vision in studying fire plume behavior of tilting flames

    Science.gov (United States)

    Aminfar, Amirhessam; Cobian Iñiguez, Jeanette; Pham, Stephanie; Chong, Joey; Burke, Gloria; Weise, David; Princevac, Marko

    2016-11-01

    With developments in computer science, especially in the field of computer vision, image processing has become an inevitable part of flow visualization. Computer vision can be used to visualize flow structure and to quantify its properties. We used a computer vision algorithm to study fire plume tilting when the fire is interacting with a solid wall. As the fire propagates toward the wall, the amount of air available for the fire to consume decreases on the wall side. Therefore, the fire will start tilting towards the wall. Aspen wood was used for the fuel source and various configurations of the fuel were investigated. The plume behavior was captured using a digital camera. In the post processing, the flames were isolated from the image by using edge detection techniques, making it possible to develop an algorithm to calculate flame height and flame orientation. Moreover, by using an optical flow algorithm we were able to calculate the speed associated with the edges of the flame, which is related to the flame propagation speed and the effective vertical velocity of the flame. The results demonstrated that as the size of the flame increased, the flames started tilting towards the wall, leading to the conclusion that there should be a critical area of fire at which the flames start to tilt. The algorithm also made it possible to calculate a critical distance at which the flame will start orienting towards the wall.
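
    A rough sketch, in Python with OpenCV, of the kind of pipeline the abstract describes (edge detection to isolate the flame, dense optical flow to estimate edge speed); the thresholds and flow parameters below are placeholders, not those used in the study:

        import cv2
        import numpy as np

        def flame_edges_and_flow(prev_gray, curr_gray):
            # prev_gray, curr_gray: consecutive 8-bit grayscale frames
            edges = cv2.Canny(curr_gray, 100, 200)  # edge map of the current frame
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)  # dense flow
            ys, xs = np.nonzero(edges)              # pixels lying on flame edges
            edge_flow = flow[ys, xs]                # (dx, dy) at those pixels
            mean_speed = np.linalg.norm(edge_flow, axis=1).mean() if len(xs) else 0.0
            # Flame height/orientation could be derived from the edge pixel coordinates.
            return edges, mean_speed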

  1. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. This application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object color, and then perform color segmentation. When there is a wrong action judgment, the system avoids the error with a weight voting mechanism, which can set the condition score and weight value for the action judgment and choose the best action judgment from the weight voting mechanism. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method gives good accuracy and stability during operation of the human-machine interface of the sports training system.
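
    The abstract does not give the exact scoring rules, but the weight voting idea can be illustrated with a small Python sketch in which each candidate action judgment accumulates condition scores multiplied by their weights and the highest total wins (all names and numbers below are illustrative only):

        def weighted_vote(candidate_judgements):
            # candidate_judgements maps an action label to a list of
            # (condition_score, weight) pairs; structure is illustrative only.
            totals = {
                action: sum(score * weight for score, weight in pairs)
                for action, pairs in candidate_judgements.items()
            }
            # The judgment with the highest weighted total is chosen.
            return max(totals, key=totals.get)

        # Example: two candidate judgments scored by three weighted conditions
        print(weighted_vote({
            "raise_left_arm":  [(0.9, 2.0), (0.4, 1.0), (0.7, 1.5)],
            "raise_right_arm": [(0.5, 2.0), (0.8, 1.0), (0.6, 1.5)],
        }))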

  2. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    Science.gov (United States)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
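
    A hedged sketch of a comparable feature-detection/bag-of-visual-words pipeline in Python with OpenCV and scikit-learn; ORB and k-means stand in for whatever detector, descriptor and clustering the authors actually used:

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def bag_of_visual_words(images, n_words=50):
            # Detect keypoints and compute descriptors on each micrograph.
            orb = cv2.ORB_create()
            per_image_desc = []
            for img in images:
                _, desc = orb.detectAndCompute(img, None)
                per_image_desc.append(desc if desc is not None else np.zeros((0, 32), np.uint8))
            # Cluster all descriptors into a visual vocabulary.
            vocab = KMeans(n_clusters=n_words, n_init=10)
            vocab.fit(np.vstack(per_image_desc).astype(np.float32))
            # Represent each image as a histogram of visual-word occurrences;
            # these histograms can then be clustered or fed to a classifier.
            hists = np.array([
                np.bincount(vocab.predict(d.astype(np.float32)), minlength=n_words)
                if len(d) else np.zeros(n_words)
                for d in per_image_desc
            ])
            return hists, vocab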

  3. Analysis of the Indented Cylinder by the use of Computer Vision

    DEFF Research Database (Denmark)

    Buus, Ole Thomsen

    The research summarised in this PhD thesis took advantage of methods from computer vision to experimentally analyse the sorting/separation ability of a specific type of seed sorting device – known as an “indented cylinder”. The indented cylinder basically separates incoming seeds into two sub-groups: (1) “long” seeds and (2) “short” seeds (known as length-separation). The motion of seeds being physically manipulated inside an active indented cylinder was analysed using various computer vision methods. The data from such analyses were used to create an overview of the machine’s ability to separate certain species of seed from each other. Seeds are processed in order to achieve a high-quality end product: a batch of a single species of crop seed. Naturally, farmers need processed clean crop seeds that are free from non-seed impurities, weed seeds, and non-viable or dead crop seeds. Since...

  4. Computer Vision Based Methods for Detection and Measurement of Psychophysiological Indicators

    DEFF Research Database (Denmark)

    Irani, Ramin

    2017-01-01

    Recently, computer vision technologies have been used for analysis of human facial video in order to provide a remote indicator of some crucial psychophysiological parameters such as fatigue, pain, stress and heartbeat rate. Available contact-based technologies are inconvenient for monitoring patients’ physiological signals due to skin irritation and the large amount of wiring required to collect and transmit the signals, while contact-free computer vision techniques not only can be an easy and economical way to overcome this issue, but also provide automatic recognition of the patients’ emotions... Findings on facial expressions show that present facial expression recognition systems are not reliable for recognizing patients’ emotional states, especially when they have difficulties with controlling their facial muscles. Regarding future research, the authors believe that the approaches proposed in this thesis may...

  5. Computationally Efficient Iterative Pose Estimation for Space Robot Based on Vision

    Directory of Open Access Journals (Sweden)

    Xiang Wu

    2013-01-01

    Full Text Available In the pose estimation problem for space robots, photogrammetry has been used to determine the relative pose between an object and a camera. The calculation of the projection from two-dimensional measured data to three-dimensional models is of utmost importance in this vision-based estimation; however, this process is usually time consuming, especially in the outer space environment with limited hardware performance. This paper proposes a computationally efficient iterative algorithm for pose estimation based on vision technology. In this method, an error function is designed to estimate the object-space collinearity error, and the error is minimized iteratively for the rotation matrix based on the absolute orientation information. Experimental results show that this approach achieves comparable accuracy with the SVD-based methods; however, the computational time has been greatly reduced due to the use of the absolute orientation method.
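
    For reference, the SVD-based absolute-orientation step that the paper benchmarks against can be sketched in a few lines of Python/NumPy; this is the generic Kabsch-style least-squares rotation, not the authors' iterative algorithm:

        import numpy as np

        def best_rotation(model_pts, observed_pts):
            # Least-squares rotation aligning centred 3-D model points to observed points.
            p = model_pts - model_pts.mean(axis=0)
            q = observed_pts - observed_pts.mean(axis=0)
            u, _, vt = np.linalg.svd(p.T @ q)
            d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
            return vt.T @ np.diag([1.0, 1.0, d]) @ u.T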

  6. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

    Full Text Available This study investigates ship-bridge collision avoidance. A novel system for ship-bridge collision avoidance based on monocular computer vision is proposed. In the new system, the moving ships are first captured by video sequences. Detection and tracking of the moving objects are then performed to identify the regions in the scene that correspond to the moving ships. Secondly, a quantitative description of the dynamic states of the moving objects in the geographical coordinate system, including location, velocity, orientation, etc., is calculated based on monocular vision geometry. Finally, the collision risk is evaluated and ship manipulation commands are suggested accordingly, aiming to avoid the potential collision. Both computer simulation and field experiments have been implemented to validate the proposed system. The analysis results have shown the effectiveness of the proposed system.

  7. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    Science.gov (United States)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2016-12-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.

  8. Computer vision syndrome: A study of the knowledge, attitudes and practices in Indian Ophthalmologists

    Directory of Open Access Journals (Sweden)

    Bali Jatinder

    2007-01-01

    Full Text Available Purpose: To study the knowledge, attitude and practices (KAP) towards computer vision syndrome prevalent in Indian ophthalmologists and to assess whether 'computer use by practitioners' had any bearing on the knowledge and practices in computer vision syndrome (CVS). Materials and Methods: A random KAP survey was carried out on 300 Indian ophthalmologists using a 34-point spot-questionnaire in January 2005. Results: All the doctors who responded were aware of CVS. The chief presenting symptoms were eyestrain (97.8%), headache (82.1%), tiredness and burning sensation (79.1%), watering (66.4%) and redness (61.2%). Ophthalmologists using computers reported that focusing from distance to near and vice versa (P = 0.006, χ2 test), blurred vision at a distance (P = 0.016, χ2 test) and blepharospasm (P = 0.026, χ2 test) formed part of the syndrome. The main mode of treatment used was tear substitutes. Half of ophthalmologists (50.7%) were not prescribing any spectacles. They did not have any preference for any special type of glasses (68.7%) or spectral filters. Computer-users were more likely to prescribe sedatives/anxiolytics (P = 0.04, χ2 test), spectacles (P = 0.02, χ2 test) and conscious frequent blinking (P = 0.003, χ2 test) than the non-computer-users. Conclusions: All respondents were aware of CVS. Confusion regarding treatment guidelines was observed in both groups. Computer-using ophthalmologists were more informed of symptoms and diagnostic signs but were misinformed about treatment modalities.

  9. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
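
    For context, one common form of the recursive equations mentioned above can be written as a straightforward serial computation in Python/NumPy; the paper's contribution is precisely the row-parallel hardware decomposition of this recursion, which is not shown here:

        import numpy as np

        def integral_image(img):
            # s[y, x]  : cumulative sum down column x, up to row y
            # ii[y, x] : sum of all pixels with row <= y and column <= x
            h, w = img.shape
            s = np.zeros((h, w), dtype=np.int64)
            ii = np.zeros((h, w), dtype=np.int64)
            for y in range(h):
                for x in range(w):
                    s[y, x] = img[y, x] + (s[y - 1, x] if y > 0 else 0)
                    ii[y, x] = s[y, x] + (ii[y, x - 1] if x > 0 else 0)
            return ii

        # Sanity check against the vectorised equivalent:
        # np.cumsum(np.cumsum(img, axis=0), axis=1)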

  10. Comparison of Computer Vision and Photogrammetric Approaches for Epipolar Resampling of Image Sequence.

    Science.gov (United States)

    Kim, Jae-In; Kim, Taejung

    2016-03-22

    Epipolar resampling is the procedure of eliminating vertical disparity between stereo images. Due to its importance, many methods have been developed in the computer vision and photogrammetry field. However, we argue that epipolar resampling of image sequences, instead of a single pair, has not been studied thoroughly. In this paper, we compare epipolar resampling methods developed in both fields for handling image sequences. Firstly we briefly review the uncalibrated and calibrated epipolar resampling methods developed in computer vision and photogrammetric epipolar resampling methods. While it is well known that epipolar resampling methods developed in computer vision and in photogrammetry are mathematically identical, we also point out differences in parameter estimation between them. Secondly, we tested representative resampling methods in both fields and performed an analysis. We showed that for epipolar resampling of a single image pair all uncalibrated and photogrammetric methods tested could be used. More importantly, we also showed that, for image sequences, all methods tested, except the photogrammetric Bayesian method, showed significant variations in epipolar resampling performance. Our results indicate that the Bayesian method is favorable for epipolar resampling of image sequences.
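
    A minimal sketch of the uncalibrated, computer-vision style resampling of a single pair in Python with OpenCV, assuming matched point arrays pts1 and pts2 are already available; this illustrates the class of methods being compared, not the paper's own implementation:

        import cv2
        import numpy as np

        def epipolar_resample_uncalibrated(img1, img2, pts1, pts2):
            # pts1, pts2: Nx2 arrays of corresponding points (N >= 8 for RANSAC)
            pts1 = np.asarray(pts1, dtype=np.float32)
            pts2 = np.asarray(pts2, dtype=np.float32)
            F, _mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
            h, w = img1.shape[:2]
            # Rectifying homographies that map epipolar lines to horizontal lines
            _ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
            rect1 = cv2.warpPerspective(img1, H1, (w, h))
            rect2 = cv2.warpPerspective(img2, H2, (w, h))
            return rect1, rect2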

  11. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discussed the image preprocessing, object detection and pose estimation algorithms under the poor light conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, was carried out in detail. • The technical issues encountered during the research were discussed. - Abstract: The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means of in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically by using computer vision. The use of computer vision for identification and location faces some difficult challenges such as shadows, poor contrast, low illumination level, little texture and so on. The method developed in this paper enables credible identification of objects with shadows through invariant images and edge detection. The proposed algorithms are validated through our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects with different shapes and sizes can be picked up successfully.
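
    The contour-detection, contour-filter and MER-extraction steps of such a pipeline can be sketched in Python with OpenCV (version 4.x assumed) as follows; thresholds and the area filter are placeholders, and the full pose estimation against the robot base is omitted:

        import cv2

        def locate_fragments(gray, min_area=200.0):
            # gray: 8-bit grayscale in-vessel image
            edges = cv2.Canny(gray, 30, 90)
            contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            results = []
            for c in contours:
                if cv2.contourArea(c) < min_area:      # contour filter step
                    continue
                (cx, cy), (w, h), angle = cv2.minAreaRect(c)  # MER: centre, size, angle
                results.append({"centre": (cx, cy), "size": (w, h), "angle": angle})
            return results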

  12. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  13. Impact of a computer-based auto-tutorial program on parasitology test scores of four consecutive classes of veterinary medical students.

    Science.gov (United States)

    Pinckney, R D; Mealy, M J; Thomas, C B; MacWilliams, P S

    2001-01-01

    A "Hard and Soft Tick" auto-tutorial that integrates basic knowledge of the parasite biology with practical aspects of tick identification, clinical presentation, pathology, disease transmission, treatment, and control was developed at the University of Wisconsin-Madison School of Veterinary Medicine. The purpose of this study was to assess impact of the auto-tutorial on parasitology test scores in four classes (1999, 2000, 2001, and 2002) of veterinary students. The analysis revealed a small but significant increase (p = 0.054) in mean percentage examination scores for students who used the tutorial over those who did not.

  14. Generating Consistent Program Tutorials

    DEFF Research Database (Denmark)

    Vestdam, Thomas

    2002-01-01

    In this paper we present a tool that supports construction of program tutorials. A program tutorial provides the reader with an understanding of an example program by interleaving fragments of source code and explaining text. An example program can for example illustrate how to use a library...

  15. EFFECTIVE ELECTRONIC TUTORIAL

    Directory of Open Access Journals (Sweden)

    Andrei A. Fedoseev

    2014-01-01

    Full Text Available The article analyzes the creation and application of effective electronic tutorials on the basis of pedagogical theory. The issues of the necessary electronic tutorial functionality, ways of organizing the educational process with the use of information and communication technologies, and the logistics of electronic educational resources are touched upon.

  16. 2014 CESM Tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Marika

    2014-08-11

    The 2014 annual tutorial for the Community Earth System Model (CESM) was held on August 11-August 15, 2014 at the National Center for Atmospheric Research in Boulder, CO. It included lectures and practical sessions on numerous aspects of the CESM model. The proceedings submitted here include a description of the tutorial.

  17. IL web tutorials

    DEFF Research Database (Denmark)

    Hyldegård, Jette; Lund, Haakon

    2012-01-01

    The paper presents the results from a study on information literacy in a higher education (HE) context based on a larger research project evaluating 3 Norwegian IL web tutorials at 6 universities and colleges in Norway. The aim was to evaluate how the 3 web tutorials served students’ information seeking and writing process in a study context and to identify barriers to the employment and use of the IL web tutorials, hence to the underlying information literacy intentions of the developer. Both qualitative and quantitative methods were employed. A clear mismatch was found between intention and use of the web tutorials. In addition, usability only played a minor role compared to relevance. It is concluded that the positive expectations of the IL web tutorials tend to be overrated by the developers. Suggestions for further research are presented....

  18. Practice of Ergonomic Principles and Computer Vision Syndrome (CVS) among Undergraduate Students in Chennai

    Directory of Open Access Journals (Sweden)

    Muthunarayanan Logaraj

    2013-04-01

    Full Text Available ABSTRACT Background: With increasing use of computers by young adults in educational institutions as well as at home, there is a need to investigate whether students are adopting ergonomic principles when using computers. Objective: To assess the practice of students on ergonomic principles while working on computers and its association with the symptoms of Computer Vision Syndrome (CVS). Methodology: A cross-sectional study was conducted among undergraduate students using a pre-tested structured questionnaire on the demographic profile, practice of ergonomic principles and symptoms of CVS experienced during continuous computer work within the past one month. Results: Out of 416 students studied, 50% of them viewed the computer at a distance of 20 to 28 inches, 61% viewed the computer screen at the same level, 42.8% placed the reference material between monitor and keyboard, 24.5% tilted the screen backward, 75.7% took frequent breaks and 56.0% blinked frequently to prevent CVS. Students who viewed the computer at a distance of less than 20 inches, viewed upwards or downwards to see the computer, did not avoid glare or did not take frequent breaks were at higher risk of developing CVS. Students who did not use an adjustable chair, a height-adjustable keyboard and an anti-glare screen were at higher risk of developing CVS. Conclusion: The students who were not practicing ergonomic principles and did not check posture and make ergonomic alterations were at higher risk of developing CVS. Keywords: Ergonomic principles, computer vision syndrome, undergraduate students. [Natl J Med Res 2013; 3(2): 111-116]

  19. Model of Quantum Computing in the Cloud: The Relativistic Vision Applied in Corporate Networks

    Directory of Open Access Journals (Sweden)

    Chau Sen Shia

    2016-08-01

    Full Text Available Cloud computing is one of the subjects of interest to information technology professionals and to organizations, especially where financial economics and return on investment for companies are concerned. This work contributes by proposing a model of quantum computing in the cloud, using concepts from relativistic physics and the foundations of quantum mechanics, to offer a new vision of the use of virtualization environments in corporate networks. The model was based on simulation and testing of connections with providers in virtualization environments with datacenters, and on applying the basics of relativity and quantum mechanics to communication with the networks of companies, in order to establish alliances and resource sharing between the organizations. The data were collected and calculations were then performed that demonstrate and identify connections and integrations relating cloud computing to the relativistic vision, in such a way that they complement the approaches of physics and computing with the theories of the magnetic field and the propagation of light. The research is characterized as exploratory, because it seeks to verify physical connections with cloud computing, the network of companies and the adhesion of the proposed model. The relationship between the proposal and its practical application is presented, which makes it possible to describe the results of the main features, demonstrating the integration of the relativistic model with new technologies for the virtualization of datacenters, and to optimize resources with the propagation of light, electromagnetic waves, simultaneity, length contraction and time dilation.

  20. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Kevin L., E-mail: kevinmoore@ucsd.edu; Moiseenko, Vitali [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093 (United States); Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); McNutt, Todd R. [Department of Radiation Oncology and Molecular Radiation Science, School of Medicine, Johns Hopkins University, Baltimore, Maryland 21231 (United States); Mutic, Sasa [Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri 63110 (United States)

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  1. The use of computer vision techniques to augment home based sensorised environments.

    Science.gov (United States)

    Uhríková, Zdenka; Nugent, Chris D; Hlavác, Václav

    2008-01-01

    Technology within the home environment is becoming widely accepted as a means to facilitate independent living. Nevertheless, practical issues of detecting different tasks between multiple persons within the same environment, along with managing instances of uncertainty associated with recorded sensor data, are two key challenges yet to be fully solved. This work presents details of how computer vision techniques can be used as both an alternative and a complementary means in the assessment of behaviour in home based sensorised environments. Within our work we assessed the ability of vision processing techniques, in conjunction with sensor based data, to deal with instances of multiple occupancy. Our results indicate that the inclusion of the video data improved the overall process of task identification by detecting and recognizing multiple people in the environment using a color based tracking algorithm.

  2. Vision correction for computer users based on image pre-compensation with changing pupil size.

    Science.gov (United States)

    Huang, Jian; Barreto, Armando; Alonso, Miguel; Adjouadi, Malek

    2011-01-01

    Many computer users suffer varying degrees of visual impairment, which hinder their interaction with computers. In contrast with available methods of vision correction (spectacles, contact lenses, LASIK, etc.), this paper proposes a vision correction method for computer users based on image pre-compensation. The blurring caused by visual aberration is counteracted through the pre-compensation performed on images displayed on the computer screen. The pre-compensation model used is based on the visual aberration of the user's eye, which can be measured by a wavefront analyzer. However, the aberration measured is associated with one specific pupil size. If the pupil has a different size during viewing of the pre-compensated images, the pre-compensation model should also be modified to sustain appropriate performance. In order to solve this problem, an adjustment of the wavefront function used for pre-compensation is implemented to match the viewing pupil size. The efficiency of these adjustments is evaluated with an "artificial eye" (high resolution camera). Results indicate that the adjustment used is successful and significantly improves the images perceived and recorded by the artificial eye.
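
    A bare-bones sketch of frequency-domain pre-compensation in Python/NumPy, assuming a point-spread function (same shape as the image, centred) has already been derived from the measured wavefront for the current pupil size; the regularisation constant and clipping are our own simplifications, not the paper's procedure:

        import numpy as np

        def precompensate(image, psf, k=0.01):
            # Optical transfer function of the eye's blur for this pupil size
            otf = np.fft.fft2(np.fft.ifftshift(psf))
            img_f = np.fft.fft2(image)
            # Regularised (Wiener-style) inverse: dividing the spectrum by the OTF
            # so that the eye's blur approximately cancels during viewing
            comp_f = img_f * np.conj(otf) / (np.abs(otf) ** 2 + k)
            comp = np.real(np.fft.ifft2(comp_f))
            return np.clip(comp, 0.0, 1.0)   # keep the result within display range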

  3. Clinical efficacy of Ayurvedic management in computer vision syndrome: A pilot study.

    Science.gov (United States)

    Dhiman, Kartar Singh; Ahuja, Deepak Kumar; Sharma, Sanjeev Kumar

    2012-07-01

    Improper use of the sense organs, violating the moral code of conduct, and the effect of time are the three basic causative factors behind all health problems. The computer, the knowledge bank of modern life, has emerged as a profession causing vision-related discomfort, ocular fatigue, and systemic effects. Computer Vision Syndrome (CVS) is the new nomenclature for the visual, ocular, and systemic symptoms arising due to prolonged and improper working on the computer, and it is emerging as a pandemic of the 21st century. On critical analysis of the symptoms of CVS on the Tridoshika theory of Ayurveda, as per the road map given by Acharya Charaka, it seems to be a Vata-Pittaja ocular cum systemic disease which needs a systemic as well as topical treatment approach. Shatavaryaadi Churna (orally), Go-Ghrita Netra Tarpana (topically), and counseling regarding proper working conditions on the computer were tried in 30 patients with CVS. In group I, where both oral and local treatment was given, significant improvement in all the symptoms of CVS was observed, whereas groups II and III, which were given local treatment and counseling regarding proper working conditions, respectively, showed insignificant results. The study verified the hypothesis that CVS in the Ayurvedic perspective is a Vata-Pittaja disease affecting mainly the eyes and the body as a whole, and that it needs a systemic intervention rather than topical ocular medication only.

  4. IL web tutorials

    DEFF Research Database (Denmark)

    Hyldegård, Jette; Lund, Haakon

    2012-01-01

    The paper presents the results from a study on information literacy in a higher education (HE) context based on a larger research project evaluating 3 Norwegian IL web tutorials at 6 universities and colleges in Norway. The aim was to evaluate how the 3 web tutorials served students’ information seeking and writing process in a study context and to identify barriers to the employment and use of the IL web tutorials, hence to the underlying information literacy intentions of the developer. Both qualitative and quantitative methods were employed. A clear mismatch was found between intention...

  5. Computer use and vision-related problems among university students in Ajman, United Arab Emirates.

    Science.gov (United States)

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among University students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The Chi-square test was used to determine the significance of the observed differences between the variables, with the level of statistical significance set at P < 0.05. The visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). A high prevalence of vision related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with occurrence of the symptoms and increased interruptions of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer related vision problems.

  6. An Investigation of the Potential for a Computer-based Tutorial Program Covering the Cardiovascular System to Replace Traditional Lectures.

    Science.gov (United States)

    Dewhurst, D. G.; Williams, A. D.

    1998-01-01

    Presents the results of a comparative study to evaluate the effectiveness of two interactive computer-based learning (CBL) programs, covering the cardiovascular system, as an alternative to lectures for first year undergraduate students at a United Kingdom University. Discusses results in relation to the design of evaluative studies and the future…

  7. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for remote monitoring and control of an unmanned system in atomic power station environments. Here, we define a command as the loci of a gesture. We aim at the development of an algorithm using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross correlation of the PDOE image. To recognize the gesture word, the 8-direction code is employed as the input symbol for a discrete HMM. Another gesture recognition system, based on glove sensors, uses a Pinch glove and a Polhemus sensor as input devices. The features extracted through preprocessing act as the input signal of the recognizer. For recognition of the 3D loci of the Polhemus sensor, a discrete HMM is also adopted. An alternative approach to the two foregoing recognition systems uses the vision and glove sensors together. The extracted mesh feature and the 8-direction code from the locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is introduced here and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.
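
    The 8-direction coding of a tracked hand locus, used as the discrete-HMM input symbols, can be sketched in Python as follows (axis conventions and the code numbering are assumptions for illustration):

        import numpy as np

        def direction_codes(trajectory):
            # Quantise a 2-D trajectory into 8 direction codes
            # (code 0 = east, increasing counter-clockwise).
            traj = np.asarray(trajectory, dtype=float)
            deltas = np.diff(traj, axis=0)
            angles = np.arctan2(deltas[:, 1], deltas[:, 0])          # radians
            codes = np.round(angles / (np.pi / 4)).astype(int) % 8   # nearest of 8
            return codes.tolist()

        # e.g. a rightward stroke followed by an upward stroke -> [0, 2]
        print(direction_codes([(0, 0), (5, 0), (5, 5)]))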

  8. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    Science.gov (United States)

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

    The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz) which were generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed high classification accuracies (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 sec), enough to be used for a binary decision. Based on the suggested paradigm, we implemented a first online ASSR-based BCI system that demonstrated the possibility of materializing a totally vision-free BCI system.
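
    The power-difference feature at the two beat frequencies can be sketched in Python/NumPy as below; this is only the core spectral comparison, not the authors' full classification procedure:

        import numpy as np

        def assr_decision(eeg, fs, f_left=37.0, f_right=43.0):
            # eeg: 1-D array for one trial; fs: sampling rate in Hz
            spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            # Spectral power at the bins nearest to each beat frequency
            p_left = spectrum[np.argmin(np.abs(freqs - f_left))]
            p_right = spectrum[np.argmin(np.abs(freqs - f_right))]
            return "left" if p_left > p_right else "right"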

  9. Computer vision-based limestone rock-type classification using probabilistic neural network

    Institute of Scientific and Technical Information of China (English)

    Ashok Kumar Patel; Snehamoy Chatterjee

    2016-01-01

    Proper quality planning of limestone raw materials is an essential job in maintaining the desired feed in a cement plant. Rock-type identification is an integrated part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory scale vision-based model was developed using a probabilistic neural network (PNN) where color histogram features are used as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input for the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal or close to optimum classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily for classifying limestone rock-types. Overall the error of mis-classification is below 6%. When compared with three other classification algorithms, it is observed that the proposed method performs substantially better than all three classification algorithms.
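
    A sketch in Python of the nine-dimensional colour-histogram feature vector described above (the abstract does not specify the weighting used for the mean, so a plain per-channel mean is substituted here):

        import numpy as np
        from scipy.stats import skew, kurtosis

        def rock_features(rgb_image):
            # rgb_image: HxWx3 array; returns the 9 features (mean, skewness,
            # kurtosis for each of the R, G, B channels) used as PNN input.
            feats = []
            for ch in range(3):
                values = rgb_image[..., ch].astype(float).ravel()
                feats.extend([values.mean(), skew(values), kurtosis(values)])
            return np.array(feats)   # shape (9,)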

  10. Computer vision-based limestone rock-type classification using probabilistic neural network

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Patel

    2016-01-01

    Full Text Available Proper quality planning of limestone raw materials is an essential job in maintaining the desired feed in a cement plant. Rock-type identification is an integrated part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory scale vision-based model was developed using a probabilistic neural network (PNN) where color histogram features are used as input. The color image histogram-based features, which include weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input for the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal or close to optimum classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model can perform satisfactorily for classifying limestone rock-types. Overall the error of mis-classification is below 6%. When compared with three other classification algorithms, it is observed that the proposed method performs substantially better than all three classification algorithms.

  11. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created for which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  13. Adult nutrition assessment tutorial

    Science.gov (United States)

    This tutorial presents a systematic approach to nutrition assessment based on a modern appreciation for the contributions of inflammation that serve as the foundation for newly proposed consensus definitions for malnutrition syndromes. Practical indicators of malnutrition and inflammation have been ...

  14. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Directory of Open Access Journals (Sweden)

    Anyela Camargo

    Full Text Available Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open source software. Source code for data analysis is written using the R package. The equations to calculate image descriptors have been also provided.

  15. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Science.gov (United States)

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P

    2014-01-01

    Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open source software. Source code for data analysis is written using the R package. The equations to calculate image descriptors have been also provided.
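
    The pipeline described here (segmentation, generic feature extraction, statistical reduction) can be gestured at with off-the-shelf tools. The sketch below is an assumption-laden stand-in rather than the authors' published R code: the excess-green segmentation rule, the particular region properties and the five-component PCA are illustrative choices.

    import numpy as np
    from skimage import io, filters, measure
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def rosette_features(path):
        """Segment the largest green object and return generic shape descriptors."""
        img = io.imread(path)
        # Excess-green style index to separate plant from background (assumption).
        g = img[..., 1].astype(float) - 0.5 * (img[..., 0].astype(float) + img[..., 2].astype(float))
        mask = g > filters.threshold_otsu(g)
        region = max(measure.regionprops(measure.label(mask)), key=lambda r: r.area)
        return [region.area, region.perimeter, region.eccentricity, region.solidity,
                region.extent, region.major_axis_length, region.minor_axis_length]

    # Hypothetical usage: image_paths is a list of rosette photographs.
    # feats = np.array([rosette_features(p) for p in image_paths])
    # scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(feats))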

  16. The Computer-Vision Symptom Scale (CVSS17): development and initial validation.

    Science.gov (United States)

    González-Pérez, Mariano; Susi, Rosario; Antona, Beatriz; Barrio, Ana; González, Enrique

    2014-06-17

    To develop a questionnaire (in Spanish) to measure computer-related visual and ocular symptoms (CRVOS), a pilot questionnaire was created by consulting the literature, clinicians, and video display terminal (VDT) workers. The replies of 636 subjects completing the questionnaire were assessed using the Rasch model and conventional statistics to generate a new scale, designated the Computer-Vision Symptom Scale (CVSS17). Validity and reliability were determined by Rasch fit statistics, principal components analysis (PCA), person separation, differential item functioning (DIF), and item-person targeting. To assess construct validity, the CVSS17 was correlated with a Rasch-based visual discomfort scale (VDS) in 163 VDT workers; this group completed the CVSS17 twice in order to assess test-retest reliability (two-way single-measure intraclass correlation coefficient [ICC] with its 95% confidence interval, and the coefficient of repeatability [COR]). The CVSS17 contains 17 items exploring 15 different symptoms. These items showed good reliability and internal consistency (mean square infit and outfit 0.88-1.17, eigenvalue for the first residual PCA component 1.37, person separation 2.85, and no DIF). Pearson's correlation with VDS scores was 0.60 (P < 0.001). The CVSS17 can thus be used to assess CRVOS in computer workers. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  17. Towards Domain Ontology Creation Based on a Taxonomy Structure in Computer Vision

    Directory of Open Access Journals (Sweden)

    Sadgal mohamed

    2016-02-01

    Full Text Available In computer vision, creating a knowledge base usable by information systems requires a data structure that facilitates access to the information. The artificial intelligence community uses ontologies to structure and represent domain knowledge. This information structure can serve as a database for many geographic information systems (GIS) or for information systems dealing with real objects, for example road scenes, and it can also be utilized by other systems. To this end, we provide a process to create a taxonomy structure based on a new hierarchical image clustering method. The hierarchical relation is based on visual object features and contributes to building the domain ontology.

  18. Tensor Voting A Perceptual Organization Approach to Computer Vision and Machine Learning

    CERN Document Server

    Mordohai, Philippos

    2006-01-01

    This lecture presents research on a general framework for perceptual organization that was conducted mainly at the Institute for Robotics and Intelligent Systems of the University of Southern California. It is not written as a historical recount of the work, since the sequence of the presentation is not in chronological order. It aims at presenting an approach to a wide range of problems in computer vision and machine learning that is data-driven, local and requires a minimal number of assumptions. The tensor voting framework combines these properties and provides a unified perceptual organiza

  19. Computer vision for robots; Proceedings of the Meeting, Cannes, France, December 2-6, 1985

    Science.gov (United States)

    Faugeras, O. D. (Editor); Kelley, R. B. (Editor)

    1986-01-01

    The conference presents papers on segmentation techniques, three-dimensional recognition and representation, processing image sequences, and navigation and mobility. Particular attention is given to determining the pose of an object, adaptive least squares correlation with geometrical constraints, and the reliable formation of feature vectors for two-dimensional shape representation. Other topics include the real-time tracking of a target moving on a natural textured background, computer vision for the guidance of roving robots, and integrating sensory data for object recognition tasks.

  1. An Application of Computer Vision Systems to Solve the Problem of Unmanned Aerial Vehicle Control

    Directory of Open Access Journals (Sweden)

    Aksenov Alexey Y.

    2014-09-01

    Full Text Available The paper considers an approach for applying computer vision systems to solve the problem of unmanned aerial vehicle control. Processing of the images obtained through the onboard camera is required for absolute positioning of the aerial platform (automatic landing and take-off, hovering, etc.). The proposed method combines the advantages of existing systems and gives the ability to hover over a given point and to perform exact take-off and landing. The limitations of the implemented methods are determined, and an algorithm is proposed to combine them in order to improve efficiency.

  2. The computer vision in the service of safety and reliability in steam generators inspection services; La vision computacional al servicio de la seguridad y fiabilidad en los servicios de inspeccion en generadores de vapor

    Energy Technology Data Exchange (ETDEWEB)

    Pineiro Fernandez, P.; Garcia Bueno, A.; Cabrera Jordan, E.

    2012-07-01

    Computer vision has matured very quickly over the last ten years, facilitating new developments in various areas of nuclear application and making it possible to automate and simplify processes and tasks, either in place of or in collaboration with people and equipment, efficiently. Current computer vision (a more appropriate term than artificial vision) also offers great possibilities for improving the reliability and safety of NPP inspection systems for steam generators.

  3. Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions

    CERN Document Server

    Fedkiw, R P

    2003-01-01

    In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods, and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes, and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, fluids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision.

  4. Computer Vision Utilization for Detection of Green House Tomato under Natural Illumination

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2013-02-01

    Full Text Available The agricultural sector has seen the application of automated systems for two decades, for example to harvest fruit. Computer vision is one of the technologies most widely used in the food industry and agriculture. In this paper, an automated system based on computer vision for harvesting greenhouse tomatoes is presented. A CCD camera takes images of the workspace, and tomatoes with over 50 percent ripeness are detected through an image processing algorithm. In this research, three color spaces (RGB, HSI and YCbCr) and three algorithms (threshold recognition, image curvature and red/green ratio) were used to separate ripe tomatoes from the background under natural illumination. The average errors of the threshold recognition, red/green ratio and image curvature algorithms were 11.82%, 10.03% and 7.95% in the HSI, RGB and YCbCr color spaces, respectively. Therefore, the YCbCr color space and the image curvature algorithm were identified as the most suitable for recognizing fruit under natural illumination conditions.
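
    Two of the colour cues mentioned above, the red/green ratio and a chrominance threshold in YCbCr, can be sketched with OpenCV as below. The threshold values and the file name are invented for illustration and would need tuning for real greenhouse images.

    import cv2
    import numpy as np

    def ripe_mask_rg_ratio(bgr, ratio_thresh=1.2):
        # Adding 1 avoids division by zero in dark pixels.
        b, g, r = cv2.split(bgr.astype(np.float32) + 1.0)
        return (r / g) > ratio_thresh

    def ripe_mask_ycbcr(bgr, cr_thresh=150):
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        return ycrcb[..., 1] > cr_thresh               # Cr channel highlights red regions

    bgr = cv2.imread("tomato_scene.jpg")               # hypothetical image
    mask = ripe_mask_rg_ratio(bgr) & ripe_mask_ycbcr(bgr)
    print("fraction of pixels flagged as ripe tomato:", mask.mean())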

  5. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    Science.gov (United States)

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-03-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak fluorescent labeling, using cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even in conditions whose contrast is degraded by two orders of magnitude compared with our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.

  6. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under high-vacuum conditions during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm at the current camera resolution, which satisfied the requirements of the laser diagnostic system calibration.

  7. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10^-6 by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
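
    A minimal stand-in for the described MLP classifier is sketched below, assuming the visual features have already been extracted into a feature matrix. The file names, hidden-layer size and train/test split are placeholders rather than the paper's settings.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.load("wheat_features.npy")   # hypothetical: 200 grains x visual features
    y = np.load("wheat_labels.npy")     # hypothetical: "bread" or "durum" per grain

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, stratify=y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
    model.fit(X_tr, y_tr)
    print("test accuracy:", model.score(X_te, y_te))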

  8. Selective cultivation and rapid detection of Staphylococcus aureus by computer vision.

    Science.gov (United States)

    Wang, Yong; Yin, Yongguang; Zhang, Chaonan

    2014-03-01

    In this paper, we developed a selective growth medium and a more rapid detection method based on computer vision for selective isolation and identification of Staphylococcus aureus from foods. The selective medium consisted of tryptic soy broth basal medium, 3 inhibitors (NaCl, K2TeO3, and phenethyl alcohol), and 2 accelerators (sodium pyruvate and glycine). After 4 h of selective cultivation, bacterial detection was accomplished using computer vision. The total analysis time was 5 h. Compared to the Baird-Parker plate count method, which requires 4 to 5 d, this new detection method offers great time savings. Moreover, our novel method had a correlation coefficient of greater than 0.998 when compared with the Baird-Parker plate count method. The detection range for S. aureus was 10 to 10^7 CFU/mL. Our new, rapid detection method for microorganisms in foods has great potential for routine food safety control and microbiological detection applications. © 2014 Institute of Food Technologists®

  9. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

    Full Text Available The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature of activity suggesting potential antipsychotic activity. This finding is consistent with TP-10’s activity in multiple rodent models that is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to those of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  10. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are treated as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for orientation and are evaluated for the correctness of tie-point detection, computation time, and the difficulties involved in their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
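
    The tie-point step can be illustrated with one of the listed detectors. The sketch below uses BRISK with a Lowe ratio test; the panorama file names and the 0.75 ratio threshold are assumptions, and some of the other detectors compared in the paper require opencv-contrib builds.

    import cv2

    img1 = cv2.imread("scan_station1_panorama.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    img2 = cv2.imread("scan_station2_panorama.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

    detector = cv2.BRISK_create()
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)          # BRISK descriptors are binary
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    print(len(good), "tie-point candidates between the two scan panoramas")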

  11. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Science.gov (United States)

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  12. Screencast Tutorials Enhance Student Learning of Statistics

    Science.gov (United States)

    Lloyd, Steven A.; Robertson, Chuck L.

    2012-01-01

    Although the use of computer-assisted instruction has rapidly increased, there is little empirical research evaluating these technologies, specifically within the context of teaching statistics. The authors assessed the effect of screencast tutorials on learning outcomes, including statistical knowledge, application, and interpretation. Students…

  13. A Shape Dynamics Tutorial

    CERN Document Server

    Mercati, Flavio

    2014-01-01

    Shape Dynamics (SD) is a new theory of gravity that is based on fewer and more fundamental first principles than General Relativity (GR). The most important feature of SD is the replacement of GR's relativity of simultaneity with a more tractable gauge symmetry, namely invariance under spatial conformal transformations. This Tutorial contains both a quick introduction for readers curious about SD and a detailed walk-through of the historical and conceptual motivations for the theory, its logical development from first principles and an in-depth description of its present status. The Tutorial is sufficiently self-contained for an undergrad student with some basic background in General Relativity and Lagrangian/Hamiltonian mechanics. It is intended both as a reference text for students approaching the subject, and as a review article for researchers interested in the theory. This is a first version of the Tutorial, which will be periodically expanded and updated with the latest results.

  14. Die Integration eines computerbasierten Anatomie-Lernprogramms im Curriculum der Ausbildung Medizinisch-technischer Assistenten der Fachrichtung Radiologie [The integration of a computer-based tutorial in anatomy into an educational curriculum for student radiographers/technicians]

    Directory of Open Access Journals (Sweden)

    Niewald, Marcus

    2009-11-01

    Full Text Available Purpose: Anatomy is an important subject in the education of radiographers and radiotherapy technicians. The enormous amount of information may render efficient learning more difficult and lead to sub-optimal results. The purpose of this study was to test whether the introduction of a computer-based tutorial enhances learning success in anatomy. Methods: A commercially available anatomy tutorial, especially adapted to the requirements of the education of radiographers and designed to facilitate and structure the frequent repetition of the material, was introduced into the conventional curriculum. The tutorial was used during normal lessons, and work with it was obligatory. The students could learn anatomical structures and landmarks repeatedly as well as test themselves. The scores obtained in the final examinations two years prior to the introduction of this tutorial were compared with those obtained two years after its introduction. Results: Students’ knowledge of anatomy improved markedly. Conclusion: An efficient and time-saving method of learning became possible. It was important to integrate the tutorial into the normal curriculum. The test results show the feasibility of this educational concept.

  15. Qualitative classification of milled rice grains using computer vision and metaheuristic techniques.

    Science.gov (United States)

    Zareiforoush, Hemad; Minaei, Saeid; Alizadeh, Mohammad Reza; Banakar, Ahmad

    2016-01-01

    Qualitative grading of milled rice grains was carried out in this study using a machine vision system combined with several metaheuristic classification approaches. Images of four different classes of milled rice, namely low-processed sound grains (LPS), low-processed broken grains (LPB), high-processed sound grains (HPS), and high-processed broken grains (HPB), representing quality grades of the product, were acquired using a computer vision system. Four different metaheuristic classification techniques, comprising artificial neural networks, support vector machines, decision trees and Bayesian networks, were utilized to classify the milled rice samples. Results of the validation process indicated that the artificial neural network with a 12-5*4 topology had the highest classification accuracy (98.72%), followed by the support vector machine with the Universal Pearson VII kernel function (98.48%), the decision tree with the REP algorithm (97.50%), and the Bayesian network with the Hill Climber search algorithm (96.89%). The results presented in this paper can be utilized for developing an efficient system for fully automated classification and sorting of milled rice grains.
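
    The comparison of classifier families can be gestured at with scikit-learn as below. GaussianNB stands in for the paper's Bayesian network and the RBF kernel for its Pearson VII kernel, so the sketch mirrors the workflow rather than reproducing the study.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB

    X = np.load("rice_features.npy")    # hypothetical image-derived feature matrix
    y = np.load("rice_grades.npy")      # hypothetical labels: LPS / LPB / HPS / HPB

    models = {
        "ANN":  MLPClassifier(hidden_layer_sizes=(12, 5), max_iter=2000, random_state=0),
        "SVM":  SVC(kernel="rbf", gamma="scale"),
        "Tree": DecisionTreeClassifier(random_state=0),
        "NB":   GaussianNB(),
    }
    for name, model in models.items():
        print(name, cross_val_score(model, X, y, cv=5).mean())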

  16. Addendum to Research MMMCV; A Man/Microbio/Megabio/Computer Vision

    CERN Document Server

    Alipour, Philip B

    2007-01-01

    In October 2007, a Research Proposal for the University of Sydney, Australia, the author suggested that biovie-physical phenomenon as `electrodynamic dependant biological vision', is governed by relativistic quantum laws and biovision. The phenomenon on the basis of `biovielectroluminescence', satisfies man/microbio/megabio/computer vision (MMMCV), as a robust candidate for physical and visual sciences. The general aim of this addendum is to present a refined text of Sections 1-3 of that proposal and highlighting the contents of its Appendix in form of a `Mechanisms' Section. We then briefly remind in an article aimed for December 2007, by appending two more equations into Section 3, a theoretical II-time scenario as a time model well-proposed for the phenomenon. The time model within the core of the proposal, plays a significant role in emphasizing the principle points on Objectives no. 1-8, Sub-hypothesis 3.1.2, mentioned in Article [arXiv:0710.0410]. It also expresses the time concept in terms of causing q...

  17. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    Science.gov (United States)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Current available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  18. CORROSION DETECTION USING A.I. : A COMPARISON OF STANDARD COMPUTER VISION TECHNIQUES AND DEEP LEARNING MODEL

    Directory of Open Access Journals (Sweden)

    Luca Petricca

    2016-05-01

    Full Text Available In this paper we present a comparison between standard computer vision techniques and a deep learning approach for automatic metal corrosion (rust) detection. For the classic approach, a classification based on the number of pixels containing specific red components has been utilized. The code, written in Python, used OpenCV libraries to compute and categorize the images. For the deep learning approach, we chose Caffe, a powerful framework developed at the Berkeley Vision and Learning Center (BVLC). The test has been performed by classifying images and calculating the total accuracy for the two different approaches.
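
    The "classic" red-pixel baseline lends itself to a few lines of OpenCV. The HSV bounds and the 3% cut-off below are invented for illustration, and the Caffe-based deep learning counterpart is not shown.

    import cv2
    import numpy as np

    def looks_rusty(path, frac_thresh=0.03):
        """Flag an image if the fraction of rust-coloured pixels exceeds a threshold."""
        bgr = cv2.imread(path)
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 80, 40], dtype=np.uint8)     # reddish-brown band (hue 0-15 of 0-179)
        upper = np.array([15, 255, 200], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        frac = np.count_nonzero(mask) / mask.size
        return frac > frac_thresh, frac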

  19. Plane geometry drawing tutorial

    Directory of Open Access Journals (Sweden)

    Eduardo Gutiérrez de Ravé

    2014-01-01

    Full Text Available A tutorial has been developed to facilitate the teaching of geometric drawing, conceived to support the theoretical and practical explanation of the concepts involved in the plane geometric constructions required in engineering. The tutorial is easy to use and offers user interactivity, practical animations, self-assessment, extensive explanations of the syllabus, and "step-by-step" teaching of the concepts thanks to the different levels of conceptual complexity included in its content.

  20. A Tutorial on UPPAAL

    DEFF Research Database (Denmark)

    Behrmann, Gerd; David, Alexandre; Larsen, Kim Guldstrand

    2004-01-01

    This is a tutorial paper on the tool Uppaal. Its goal is to be a short introduction on the flavor of timed automata implemented in the tool, to present its interface, and to explain how to use the tool. The contribution of the paper is to provide reference examples and modeling patterns.

  1. Computer Vision Based Methods for Detection and Measurement of Psychophysiological Indicators

    DEFF Research Database (Denmark)

    Irani, Ramin

    patients’ physiological signals due to skin irritation and require a huge amount of wires to collect and transmit the signals. Contact-free computer vision techniques not only can be an easy and economical way to overcome this issue, but they also provide automatic recognition of the patients’ emotions... like pain and stress. This thesis reports a series of works on contact-free heartbeat estimation, muscle fatigue detection, pain recognition and stress recognition. In measuring physiological parameters, two parameters are considered among many different physiological parameters: heartbeat rate... to provide visible heartbeat peaks in the signal. A method for physical fatigue time-offset detection from facial video is also introduced. One of the major contributions of the thesis, related to monitoring patients, is recognizing the level of pain and stress. The patients’ pain must be continuously...

  2. Compression Rate Method for Empirical Science and Application to Computer Vision

    CERN Document Server

    Burfoot, Daniel

    2010-01-01

    This philosophical paper proposes a modified version of the scientific method, in which large databases are used instead of experimental observations as the necessary empirical ingredient. This change in the source of the empirical data allows the scientific method to be applied to several aspects of physical reality that previously resisted systematic interrogation. Under the new method, scientific theories are compared by instantiating them as compression programs, and examining the codelengths they achieve on a database of measurements related to a phenomenon of interest. Because of the impossibility of compressing random data, "real world" data can only be compressed by discovering and exploiting the empirical structure it exhibits. The method also provides a new way of thinking about two longstanding issues in the philosophy of science: the problem of induction and the problem of demarcation. The second part of the paper proposes to reformulate computer vision as an empirical science of visual reality, b...

  3. DESIGN OF A NEW TYPE OF AGV BASED ON COMPUTER VISION

    Institute of Scientific and Technical Information of China (English)

    Ji Shouwen; Li Keqiang; Miao Lixin; Wang Rongben; Guo Keyou

    2004-01-01

    The structure, function and working principle of JLUIV-3, a new type of automated guided vehicle (AGV) with computer vision, are described. A white stripe of a certain width is used as the guiding mark for JLUIV-3 automated navigation. JLUIV-3 can automatically recognize the Arabic numeral codes that mark multi-branch paths and multi-operation buffers, and autonomously select the correct path to its destination. Compared with traditional AGVs, it offers much greater navigation flexibility at lower cost, and provides higher-level intelligence. The neural-network-based method for identifying the navigation path and the optimal control method of the AGV are introduced in detail.
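
    A line-following step of the kind implied by the white guide stripe is sketched below: threshold the lower part of the camera frame, take the stripe centroid, and convert its lateral offset into a proportional steering correction. The threshold, region of interest and gain are placeholders, not JLUIV-3 design values, and the code-plate recognition and neural-network path identification are not shown.

    import cv2

    def steering_from_frame(gray, k_p=0.005):
        """Return a steering command from a grayscale camera frame."""
        roi = gray[int(gray.shape[0] * 0.6):, :]           # look at the road just ahead
        _, mask = cv2.threshold(roi, 200, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return 0.0                                     # stripe lost: keep going straight
        cx = m["m10"] / m["m00"]                           # stripe centroid column
        error = cx - roi.shape[1] / 2.0                    # pixels off-centre
        return -k_p * error                                # proportional steering correction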

  4. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    Full Text Available The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. The results suggest that the use of stereographic projection has several benefits, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
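
    The two parameterizations can be made concrete with a few NumPy lines. The stereographic convention below (projection from the antipode of the identity quaternion) is one standard choice and may differ in detail from the paper's; the numerical values are arbitrary test inputs.

    import numpy as np

    def quat_to_rotmat(q):
        w, x, y, z = q / np.linalg.norm(q)
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

    def axis_angle_to_quat(v):
        theta = np.linalg.norm(v)                    # rotation angle encoded as vector length
        if theta < 1e-12:
            return np.array([1.0, 0.0, 0.0, 0.0])
        return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * v / theta])

    def stereographic_to_quat(u):
        s = u @ u                                    # maps any 3-vector to a unit quaternion
        return np.concatenate([[(1 - s) / (1 + s)], 2 * u / (1 + s)])

    u = np.array([0.1, -0.2, 0.3])                   # unconstrained 3-vector of parameters
    R = quat_to_rotmat(stereographic_to_quat(u))
    print(np.allclose(R @ R.T, np.eye(3)))           # a valid rotation matrix is orthogonal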

  5. The Event Detection and the Apparent Velocity Estimation Based on Computer Vision

    Science.gov (United States)

    Shimojo, M.

    2012-08-01

    The high spatial and temporal resolution data obtained by the telescopes aboard Hinode revealed new and interesting dynamics in the solar atmosphere. In order to detect such events and estimate the velocity of the dynamics automatically, we examined optical flow estimation methods based on OpenCV, the computer vision library. We applied the methods to a prominence eruption observed by NoRH and a polar X-ray jet observed by XRT. As a result, it is clear that the methods work well for solar images if the images are optimized for them, indicating that the optical flow estimation methods in the OpenCV library are very useful for analyzing solar phenomena.
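
    The kind of OpenCV optical-flow call evaluated in this work can be sketched as below: dense Farneback flow between two successive frames, from which an apparent speed map in pixels per frame is read off. The Farneback parameters, the histogram-equalisation preprocessing and the frame file names are assumptions standing in for the image optimisation the abstract emphasises.

    import cv2
    import numpy as np

    prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)   # hypothetical solar frames
    curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

    # Simple contrast normalisation so faint moving features survive.
    prev, curr = cv2.equalizeHist(prev), cv2.equalizeHist(curr)

    # Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)            # apparent speed in pixels/frame
    print("maximum apparent speed:", speed.max())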

  6. Universal computer vision system for monitoring the main parameters of wind turbines

    Directory of Open Access Journals (Sweden)

    Korzhavin Sergey

    2016-01-01

    Full Text Available The article presents a universal autonomous computer vision system to monitor the operation of wind turbines. The proposed system allows estimation of the rotational speed and the relative position deviation of the wind turbine. We present a universal method for determining the rotation of wind turbines of various shapes and structures. All obtained data are saved in a database. The presented method was tested at the Territory of Non-traditional Renewable Energy Sources of Ural Federal University; the experimental wind turbines are produced by the “Scientific and Production Association of automatics named after academician N.A. Semikhatov”. Results show the efficiency of the proposed system and its ability to determine the main parameters, such as rotational speed and the accuracy and quickness of orientation. The proposed solution assumes that, in most cases, the rotating and central parts of the wind turbine can be assigned different colors. A change in the color of a blade should not affect the system performance.
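
    One way to realise the colour-based idea is sketched below: track the centroid angle of a distinctly coloured blade (or blade tip) about the hub and differentiate it over frames. The HSV bounds, hub coordinates and frame rate are placeholders, not values from the described system.

    import cv2
    import numpy as np

    def blade_angle(bgr, hub_xy, lower_hsv, upper_hsv):
        """Angle of the coloured blade's centroid about the hub, or None if not found."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        return np.arctan2(cy - hub_xy[1], cx - hub_xy[0])

    def rpm_from_angles(angles, fps):
        """Convert a per-frame angle series into revolutions per minute."""
        a = np.unwrap(np.array(angles))              # remove the 2*pi wrap-arounds
        rev_per_frame = np.abs(np.diff(a)) / (2 * np.pi)
        return rev_per_frame.mean() * fps * 60.0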

  7. Lipid vesicle shape analysis from populations using light video microscopy and computer vision.

    Directory of Open Access Journals (Sweden)

    Jernej Zupanc

    Full Text Available We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1-50 µm in diameter. For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness. This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected.
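
    The two reported shape descriptors follow directly from each detected vesicle contour: an equivalent projected diameter from the enclosed area, and the isoperimetric quotient Q = 4*pi*A / P^2, which equals 1 for a perfect circle. The sketch below assumes a binary segmentation mask is already available and uses an invented minimum-area cut-off.

    import cv2
    import numpy as np

    def vesicle_descriptors(mask):
        """Return (diameter, isoperimetric quotient) for each contour in a binary mask."""
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        results = []
        for c in contours:
            area = cv2.contourArea(c)
            perim = cv2.arcLength(c, closed=True)
            if perim == 0 or area < 20:              # skip specks (assumed cut-off)
                continue
            diameter = 2.0 * np.sqrt(area / np.pi)   # equivalent projected diameter
            results.append((diameter, 4.0 * np.pi * area / perim ** 2))
        return results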

  8. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    Full Text Available This work aims at detecting surface defects on reflective industrial parts. A machine vision system performing the detection of geometric surface defects is completely described. Defects are revealed by a particular lighting device, which has been carefully designed to ensure that defects appear in the images. The lighting system greatly simplifies the image processing for defect segmentation, so that real-time inspection of reflective products is possible. To assist in the design of the imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images and provides a very efficient way to perform tests compared with numerous manual experiments.

  9. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; OHARA, Kenichi

    2007-01-01

    In the field of micro vision, there has been little research compared with the macro environment. However, by applying the results of macro-scale computer vision techniques, one can measure and observe the micro environment. Moreover, based on the effects of the micro environment, it is possible to discover new theories and new techniques.

  10. Computer vision syndrome among computer office workers in a developing country: an evaluation of prevalence and risk factors.

    Science.gov (United States)

    Ranasinghe, P; Wathurapatha, W S; Perera, Y S; Lamabadusuriya, D A; Kulatunga, S; Jayawardana, N; Katulanda, P

    2016-03-09

    Computer vision syndrome (CVS) is a group of visual symptoms experienced in relation to the use of computers. Nearly 60 million people suffer from CVS globally, resulting in reduced productivity at work and reduced quality of life of the computer worker. The present study aims to describe the prevalence of CVS and its associated factors among a nationally-representative sample of Sri Lankan computer workers. Two thousand five hundred computer office workers were invited for the study from all nine provinces of Sri Lanka between May and December 2009. A self-administered questionnaire was used to collect socio-demographic data, symptoms of CVS and its associated factors. A binary logistic regression analysis was performed in all patients with 'presence of CVS' as the dichotomous dependent variable and age, gender, duration of occupation, daily computer usage, pre-existing eye disease, not using a visual display terminal (VDT) filter, adjusting brightness of screen, use of contact lenses, angle of gaze and ergonomic practices knowledge as the continuous/dichotomous independent variables. A similar binary logistic regression analysis was performed in all patients with 'severity of CVS' as the dichotomous dependent variable and other continuous/dichotomous independent variables. The sample size was 2210 (response rate 88.4%). Mean age was 30.8 ± 8.1 years and 50.8% of the sample were males. The 1-year prevalence of CVS in the study population was 67.4%. Female gender (OR: 1.28), duration of occupation (OR: 1.07), daily computer usage (OR: 1.10), pre-existing eye disease (OR: 4.49), not using a VDT filter (OR: 1.02), use of contact lenses (OR: 3.21) and ergonomic practices knowledge (OR: 1.24) were all significantly associated with the presence of CVS. The duration of occupation (OR: 1.04) and the presence of pre-existing eye disease (OR: 1.54) were significantly associated with the presence of 'severe CVS'. Sri Lankan computer workers had a high prevalence of CVS. Female gender ...

  11. Application of Computer Vision for quality control in frozen mixed berries production: colour calibration issues

    Directory of Open Access Journals (Sweden)

    D. Ricauda Aimonino

    2013-09-01

    Full Text Available Computer vision is becoming increasingly important in the quality control of many food processes. The appearance properties of food products (colour, texture, shape and size) are, in fact, correlated with organoleptic characteristics and/or the presence of defects. Quality control based on image processing eliminates the subjectivity of human visual inspection, allowing rapid and non-destructive analysis. However, most food matrices show a wide variability in appearance features, so robust and customized image processing algorithms have to be implemented for each specific product. For this reason, quality control by visual inspection is still rather widespread in several food processes. The case study inspiring this paper concerns the production of frozen mixed berries. Once frozen, different kinds of berries are mixed together, in different amounts, according to a recipe. The correct quantity of each kind of fruit, within a certain tolerance, has to be ensured by producers. Quality control relies on taking a few samples of the same weight from each production lot and manually counting the amount of each species. This operation is tedious, error-prone, and time consuming, while a computer vision system (CVS) could determine the amount of each kind of berry in a few seconds. This paper discusses the problem of colour calibration of the CVS used for evaluating the frozen berry mixture. Images are acquired by a digital camera coupled with a dome lighting system, which gives homogeneous illumination over the entire visible surface of the berries, and by a flat-bed scanner. RGB device-dependent data are then mapped onto the CIELab colorimetric colour space using different transformation operators. The obtained results show that the proposed calibration procedure leads to colour discrepancies comparable to or even below the sensitivity of the human eye.
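
    One simple calibration operator of the kind compared in the paper is a polynomial least-squares map from device-dependent RGB values of a colour chart to the chart's reference CIELab coordinates. The sketch below assumes hypothetical chart data files and a degree-2 polynomial; the paper evaluates several such transformation operators.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rgb_chart = np.load("chart_rgb_measured.npy")    # hypothetical: N x 3 device RGB in [0, 1]
    lab_chart = np.load("chart_lab_reference.npy")   # hypothetical: N x 3 reference CIELab

    calib = make_pipeline(PolynomialFeatures(degree=2, include_bias=True), LinearRegression())
    calib.fit(rgb_chart, lab_chart)

    def device_rgb_to_lab(pixels_rgb):
        """Map an (M, 3) array of device RGB pixels to estimated CIELab values."""
        return calib.predict(pixels_rgb)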

  12. Computer vision and driver distraction: developing a behaviour-flagging protocol for naturalistic driving data.

    Science.gov (United States)

    Kuo, Jonny; Koppel, Sjaan; Charlton, Judith L; Rudin-Brown, Christina M

    2014-11-01

    Naturalistic driving studies (NDS) allow researchers to discreetly observe everyday, real-world driving to better understand the risk factors that contribute to hazardous situations. In particular, NDS designs provide high ecological validity in the study of driver distraction. With increasing dataset sizes, current best practice of manually reviewing videos to classify the occurrence of driving behaviours, including those that are indicative of distraction, is becoming increasingly impractical. Current statistical solutions underutilise available data and create further epistemic problems. Similarly, technical solutions such as eye-tracking often require dedicated hardware that is not readily accessible or feasible to use. A computer vision solution based on open-source software was developed and tested to improve the accuracy and speed of processing NDS video data for the purpose of quantifying the occurrence of driver distraction. Using classifier cascades, manually-reviewed video data from a previously published NDS was reanalysed and used as a benchmark of current best practice for performance comparison. Two software coding systems were developed - one based on hierarchical clustering (HC), and one based on gender differences (MF). Compared to manual video coding, HC achieved 86 percent concordance, 55 percent reduction in processing time, and classified an additional 69 percent of target behaviour not previously identified through manual review. MF achieved 67 percent concordance, a 75 percent reduction in processing time, and classified an additional 35 percent of target behaviour not identified through manual review. The findings highlight the improvements in processing speed and correctly classifying target behaviours achievable through the use of custom developed computer vision solutions. Suggestions for improved system performance and wider implementation are discussed.
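
    The classifier-cascade idea can be sketched with OpenCV's stock cascade machinery. The frontal-face cascade and video file below are stand-ins for a cascade trained on the distraction behaviour of interest; the point is only to show frames being flagged automatically for later review.

    import cv2

    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture("naturalistic_drive.mp4")       # hypothetical NDS clip

    flagged_frames, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(hits) > 0:
            flagged_frames.append(frame_idx)               # candidate event frame
        frame_idx += 1
    cap.release()
    print(len(flagged_frames), "frames flagged for manual review")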

  13. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision.

    Science.gov (United States)

    Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G

    2017-03-09

    Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision, and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors, using this computer vision approach, is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed and duty cycle components of tasks that are part of the threshold limit value for hand activity for the purpose of identifying patterns of exposure associated with the specific job factors, as well as for suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video, and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL, and readily identify those work elements in the task that contribute more to increased risk of injury. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.

  14. Binocular robot vision emulating disparity computation in the primary visual cortex.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Kushima, Takayuki; Yagi, Tetsuya

    2008-01-01

    We designed a VLSI binocular vision system that emulates the disparity computation in the primary visual cortex (V1). The system consists of two silicon retinas, orientation chips, and field programmable gate array (FPGA), mimicking a hierarchical architecture of visual information processing in the disparity energy model. The silicon retinas emulate a Laplacian-Gaussian-like receptive field of the vertebrate retina. The orientation chips generate an orientation-selective receptive field by aggregating multiple pixels of the silicon retina, mimicking the Hubel-Wiesel-type feed-forward model in order to emulate a Gabor-like receptive field of simple cells. The FPGA receives outputs from the orientation chips corresponding to the left and right eyes and calculates the responses of the complex cells based on the disparity energy model. The system can provide the responses of complex cells tuned to five different disparities and a disparity map obtained by comparing these energy outputs. Owing to the combination of spatial filtering by analog parallel circuits and pixel-wise computation by hard-wired digital circuits, the present system can execute the disparity computation in real time using compact hardware.
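
    A compact numerical sketch of the position-shift disparity-energy computation the hardware emulates is given below: filter the left and right images with a quadrature pair of Gabor kernels, shift the right responses by each candidate disparity, and keep the disparity with maximum pooled binocular energy at every pixel. The kernel parameters, pooling window and the five candidate disparities are illustrative.

    import cv2
    import numpy as np

    def gabor_pair(ksize=21, sigma=4.0, lambd=8.0, theta=np.pi / 2):
        even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, psi=0)
        odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, psi=np.pi / 2)
        return even, odd

    def disparity_map(left, right, disparities=(-4, -2, 0, 2, 4)):
        even, odd = gabor_pair()
        L = [cv2.filter2D(left.astype(np.float32), -1, k) for k in (even, odd)]
        R = [cv2.filter2D(right.astype(np.float32), -1, k) for k in (even, odd)]
        energies = []
        for d in disparities:
            Rs = [np.roll(r, d, axis=1) for r in R]           # position-shift model
            e = (L[0] + Rs[0]) ** 2 + (L[1] + Rs[1]) ** 2     # complex-cell energy
            energies.append(cv2.boxFilter(e, -1, (9, 9)))     # local pooling
        best = np.argmax(np.stack(energies), axis=0)
        return np.take(np.array(disparities), best)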

  15. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

    Full Text Available The thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional measurement methods for TCF thickness are the single- and double-wire methods, which have several problems, such as risks to personal safety, susceptibility to operator influence, and poor repeatability. To solve these problems, in this paper we designed and built an instrument and present a novel method to measure the TCF thickness. The instrument is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including an image denoising method, a monocular range measurement method, the scale-invariant feature transform (SIFT), and an image gray-gradient detection method. Using the present instrument and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrument and method work well on site at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of the traditional measurement methods, or even replace them.

  16. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace.

    Science.gov (United States)

    Seguí, María del Mar; Cabrero-García, Julio; Crespo, Ana; Verdú, José; Ronda, Elena

    2015-06-01

    To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, receiver operator characteristic curve, and cutoff point. Test-retest repeatability was tested using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to control the visual health of computer workers, and can potentially be used in clinical trials and outcome research. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Tutorial to SARAH

    CERN Document Server

    Staub, Florian

    2016-01-01

    I give in this brief tutorial a short practical introduction to the Mathematica package SARAH. First, it is shown how an existing model file can be changed to implement a new model in SARAH. In the second part, masses, vertices and renormalisation group equations are calculated with SARAH. Finally, the main commands to generate model files and output for other tools are summarised.

  18. Fiber Nonlinearities: A Tutorial

    Institute of Scientific and Technical Information of China (English)

    Govind P. Agrawal

    2003-01-01

    Fiber nonlinearities have long been regarded as being mostly harmful for fiber-optic communication systems. Over the last few years, however, the nonlinear effects are increasingly being used for practical telecommunications applications, the Raman amplification being only one of the recent examples. In this tutorial I review the various nonlinear effects occurring in optical fibers from both standpoints.

  19. Fiber Nonlinearities: A Tutorial

    Institute of Scientific and Technical Information of China (English)

    Govind; P.; Agrawal

    2003-01-01

    Fiber nonlinearities have long been regarded as being mostly harmful for fiber-optic communication systems. Over the last few years, however, the nonlinear effects are increasingly being used for practical telecommunications applications, the Raman amplification being only one of the recent examples. In this tutorial I review the various nonlinear effects occurring in optical fibers from both standpoints.

  20. Tutorial on architectural acoustics

    Science.gov (United States)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  1. All 2006 ATLAS Tutorials online

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    The University of Michigan has completed its full agenda of Web Lecture recording for ATLAS for 2006. The archives include all three ATLAS Week Plenary Sessions, as well as a large variety of tutorials. They are accessible at this location. Viewing requires a standard web browser with the RealPlayer plug-in (included in most browsers automatically) and works on any major platform. This is the first year our group has been asked to provide this complete service to the collaboration, so any and all feedback is welcome. We would especially like to know if you had any difficulties viewing the lectures, if you found the selection of material to be useful, and/or if you think there are any other specific events we ought to cover in 2007. Please send your comments to wlap@umich.edu. We look forward to bringing you a rich variety of new lectures in 2007, starting with the ATLAS Distributed Computing Tutorial on Feb 1, 2 in Edinburgh and concluding with the Higgs discovery talk (of course). Enjoy the Lec...

  2. The vertical monitor position for presbyopic computer users with progressive lenses: how to reach clear vision and comfortable head posture.

    Science.gov (United States)

    Weidling, Patrick; Jaschinski, Wolfgang

    2015-01-01

    When presbyopic employees are wearing general-purpose progressive lenses, they have clear vision only with a lower gaze inclination to the computer monitor, given the head assumes a comfortable inclination. Therefore, in the present intervention field study the monitor position was lowered, also with the aim to reduce musculoskeletal symptoms. A comparison group comprised users of lenses that do not restrict the field of clear vision. The lower monitor positions led the participants to lower their head inclination, which was linearly associated with a significant reduction in musculoskeletal symptoms. However, for progressive lenses a lower head inclination means a lower zone of clear vision, so that clear vision of the complete monitor was not achieved, rather the monitor should have been placed even lower. The procedures of this study may be useful for optimising the individual monitor position depending on the comfortable head and gaze inclination and the vertical zone of clear vision of progressive lenses. For users of general-purpose progressive lenses, it is suggested that low monitor positions allow for clear vision at the monitor and for a physiologically favourable head inclination. Employees may improve their workplace using a flyer providing ergonomic-optometric information.

  3. A Hypertext tutorial for teaching cephalometrics.

    Science.gov (United States)

    Clark, R D; Weekrakone, S; Rock, W P

    1997-11-01

    Hypertext is a non-linear method of text presentation. It necessitates the use of a computer to store data as a series of nodes that can be called up in any desired sequence and, as such, is a new form of discovery-based learning. This paper describes a Hypertext tutorial in cephalometrics and its subsequent testing on first-year clinical dental students. Students were divided into two groups: the first received a conventional lecture; the second used the Hypertext tutorial. Testing was by means of conventional multiple choice questions. The results showed that there was no statistically significant difference between the two groups, although the computer tutorial improved the students' knowledge more consistently than the conventional lecture did. Most students who used the computer program found it enjoyable, but time-consuming; less than half found it easy to follow.

  4. Oral omega-3 fatty acids treatment in computer vision syndrome related dry eye.

    Science.gov (United States)

    Bhargava, Rahul; Kumar, Prachi; Phogat, Hemant; Kaur, Avinash; Kumar, Manjushri

    2015-06-01

    To assess the efficacy of dietary consumption of omega-3 fatty acids (O3FAs) on dry eye symptoms, Schirmer test, tear film break up time (TBUT) and conjunctival impression cytology (CIC) in patients with computer vision syndrome. Interventional, randomized, double blind, multi-centric study. Four hundred and seventy eight symptomatic patients using computers for more than 3h per day for minimum 1 year were randomized into two groups: 220 patients received two capsules of omega-3 fatty acids each containing 180mg eicosapentaenoic acid (EPA) and 120mg docosahexaenoic acid (DHA) daily (O3FA group) and 236 patients received two capsules of a placebo containing olive oil daily for 3 months (placebo group). The primary outcome measure was improvement in dry eye symptoms and secondary outcome measures were improvement in Nelson grade and an increase in Schirmer and TBUT scores at 3 months. In the placebo group, before dietary intervention, the mean symptom score, Schirmer, TBUT and CIC scores were 7.5±2, 19.9±4.7mm, 11.5±2s and 1±0.9 respectively, and 3 months later were 6.8±2.2, 20.5±4.7mm, 12±2.2s and 0.9±0.9 respectively. In the O3FA group, these values were 8.0±2.6, 20.1±4.2mm, 11.7±1.6s and 1.2±0.8 before dietary intervention and 3.9±2.2, 21.4±4mm, 15±1.7s, 0.5±0.6 after 3 months of intervention, respectively. This study demonstrates the beneficial effect of orally administered O3FAs in alleviating dry eye symptoms, decreasing tear evaporation rate and improving Nelson grade in patients suffering from computer vision syndrome related dry eye. Copyright © 2015 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  5. Technology Support for Discussion Based Learning: From Computer Supported Collaborative Learning to the Future of Massive Open Online Courses

    Science.gov (United States)

    Rosé, Carolyn Penstein; Ferschke, Oliver

    2016-01-01

    This article offers a vision for technology supported collaborative and discussion-based learning at scale. It begins with historical work in the area of tutorial dialogue systems. It traces the history of that area of the field of Artificial Intelligence in Education as it has made an impact on the field of Computer-Supported Collaborative…

  7. On-chip imaging of Schistosoma haematobium eggs in urine for diagnosis by computer vision.

    Directory of Open Access Journals (Sweden)

    Ewert Linder

    Full Text Available BACKGROUND: Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique that can be exploited commercially for the development of inexpensive "mini-microscopes". Images can be transferred for analysis, either visually or by computer vision, both at the point of care and at remote locations. METHODS/PRINCIPAL FINDINGS: Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs, which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. CONCLUSIONS/SIGNIFICANCE: As proof of concept we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases.
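
    The record reports using an image pattern recognition algorithm without spelling it out here; a common baseline for this kind of task is normalised cross-correlation template matching, sketched below with OpenCV. The file names and the 0.6 threshold are illustrative assumptions, not the authors' algorithm.

        import cv2
        import numpy as np

        # Hypothetical images: a chip-sensor field image and a cropped egg template.
        field = cv2.imread("urine_field.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("egg_template.png", cv2.IMREAD_GRAYSCALE)

        # Normalised cross-correlation; peaks above a threshold are egg candidates.
        res = cv2.matchTemplate(field, template, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(res > 0.6)           # threshold chosen for illustration only
        h, w = template.shape
        candidates = list(zip(xs, ys))
        print(f"{len(candidates)} candidate egg locations")
        for (x, y) in candidates[:5]:
            cv2.rectangle(field, (x, y), (x + w, y + h), 255, 1)
        cv2.imwrite("egg_candidates.png", field)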

  8. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants

    Directory of Open Access Journals (Sweden)

    Pedro J. Navarro

    2016-05-01

    Full Text Available Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions, and they were trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.

  9. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    Science.gov (United States)

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions, and they were trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
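
    As a rough illustration of the classification step described in the two records above, the sketch below trains kNN, Naive Bayes and an RBF-kernel SVM with a normalisation step using scikit-learn; the pixel features are random stand-ins, not the chamber images.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.metrics import accuracy_score

        # Hypothetical data: rows are pixels, columns are RGB (or NIR) intensities,
        # labels mark plant (1) vs background (0).
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.3, 0.1, (500, 3)), rng.normal(0.7, 0.1, (500, 3))])
        y = np.array([0] * 500 + [1] * 500)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                          ("NBC", GaussianNB()),
                          ("SVM-RBF", SVC(kernel="rbf"))]:
            model = make_pipeline(StandardScaler(), clf)   # normalisation + classifier
            model.fit(Xtr, ytr)
            print(name, "accuracy:", accuracy_score(yte, model.predict(Xte)))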

  10. RT3D tutorials for GMS users

    Energy Technology Data Exchange (ETDEWEB)

    Clement, T.P. [Pacific Northwest National Lab., Richland, WA (United States); Jones, N.L. [Brigham Young Univ., Provo, UT (United States)

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in a three dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.
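
    RT3D couples MODFLOW flow fields with reaction packages, which is far beyond a snippet, but the kind of transport equation it solves is easy to illustrate. Below is a minimal, purely illustrative 1D explicit finite-difference sketch of advection-dispersion with first-order decay; the grid spacing, velocity, dispersion and decay values are invented and unrelated to the GMS tutorials.

        import numpy as np

        # dC/dt = -v dC/dx + D d2C/dx2 - k C, explicit upwind scheme (illustrative only).
        nx, dx, dt, nt = 200, 0.5, 0.05, 2000
        v, D, k = 0.5, 0.1, 0.001
        assert v * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5, "explicit scheme unstable"

        C = np.zeros(nx)
        C[0] = 1.0                                  # constant-concentration inlet
        for _ in range(nt):
            Cn = C.copy()
            adv = -v * (Cn[1:-1] - Cn[:-2]) / dx                      # upwind advection
            disp = D * (Cn[2:] - 2 * Cn[1:-1] + Cn[:-2]) / dx**2      # dispersion
            C[1:-1] = Cn[1:-1] + dt * (adv + disp - k * Cn[1:-1])     # decay term
            C[0], C[-1] = 1.0, C[-2]                # boundary conditions
        print("plume front (C > 0.5) reaches x =", dx * np.argmax(C < 0.5), "length units")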

  11. Television, computer and portable display device use by people with central vision impairment

    Science.gov (United States)

    Woods, Russell L; Satgunam, PremNandhini

    2011-01-01

    Purpose To survey the viewing experience (e.g. hours watched, difficulty) and viewing metrics (e.g. viewing distance, display size) for television (TV), computers and portable visual display devices for normally-sighted (NS) and visually impaired participants. This information may guide visual rehabilitation. Methods The survey was administered either in person or by telephone interview to 223 participants, of whom 104 had low vision (LV, worse than 6/18, age 22 to 90y, 54 males) and 94 were NS (visual acuity 6/9 or better, age 20 to 86y, 50 males). Depending on their situation, NS participants answered up to 38 questions and LV participants answered up to a further 10 questions. Results Many LV participants reported at least “some” difficulty watching TV (71/103), at least “often” having difficulty with computer displays (40/76), and extreme difficulty watching videos on handheld devices (11/16). The average daily TV viewing was slightly, but not significantly, higher for the LV participants (3.6h) than the NS (3.0h). Only 18% of LV participants used visual aids (all optical) to watch TV. Most LV participants obtained effective magnification from a reduced viewing distance for both TV and computer display. Younger LV participants also used a larger display than older LV participants to obtain increased magnification. About half of the TV viewing time occurred in the absence of a companion for both the LV and the NS participants. The mean number of TVs at home reported by LV participants (2.2) was slightly but not significantly (p=0.09) higher than that reported by NS participants (2.0). LV participants were equally likely to have a computer but were significantly (p=0.004) less likely to access the internet (73/104) than NS participants (82/94). Most LV participants expressed an interest in image-enhancing technology for TV viewing (67/104) and for computer use (50/74), if they used a computer. Conclusion In this study, both NS and LV participants

  12. Visual Behaviour Based Bio-Inspired Polarization Techniques in Computer Vision and Robotics

    OpenAIRE

    Shabayek, Abd El Rahman; Morel, Olivier; Fofi, David

    2012-01-01

    For a long time, it was thought that the sensing of polarization by animals is invariably related to their behavior, such as navigation and orientation. Recently, it was found that polarization can be part of high-level visual perception, permitting a wide range of vision applications. Polarization vision can be used for most tasks of color vision, including object recognition, contrast enhancement, camouflage breaking, and signal detection and discrimination. The polarization based visual beha...

  13. Sieveless particle size distribution analysis of particulate materials through computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Igathinathane, C. [Mississippi State University (MSU); Pordesimo, L. O. [Mississippi State University (MSU); Columbus, Eugene P [ORNL; Batchelor, William D [ORNL; Sokhansanj, Shahabaddine [ORNL

    2009-05-01

    This paper explores the inconsistency of length-based separation by mechanical sieving of particulate materials with standard sieves, which is the standard method of particle size distribution (PSD) analysis. We observed inconsistencies of length-based separation of particles using standard sieves against manual measurements, which showed deviations of 17–22 times. In addition, we demonstrated that the falling-through effect of particles cannot be avoided irrespective of the wall thickness of the sieve. We proposed and utilized computer vision with image processing as an alternative approach, wherein a user-coded Java ImageJ plugin was developed to evaluate PSD based on the length of particles. A regular flatbed scanner acquired digital images of the particulate material. The plugin determines particle length from the Feret diameter, and width from a pixel-march method, the minor axis, or the minimum dimension of the bounding rectangle, utilizing the digital images after assessing the particles' area and shape (convex or nonconvex). The plugin also includes the determination of several significant dimensions and PSD parameters. Test samples were ground biomass obtained from the first thinning and a mature stand of southern pine forest residues, oak hardwood, switchgrass, elephant grass, giant miscanthus, wheat straw, as well as Basmati rice. The sieveless PSD analysis method performed a true separation of all particles into groups based on their distinct lengths (419–639 particles, depending on the sample studied), with each group truly represented by its exact length. This approach ensured length-based separation without the inconsistencies observed with mechanical sieving. Image-based sieve simulation (developed separately) indicated a significant effect (P < 0.05) of the number of sieves used in PSD analysis, especially with non-uniform material such as ground biomass, and more than 50 equally spaced sieves were required to match the sieveless, all-distinct-particles PSD analysis.
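
    The record measures particle length as the Feret (maximum caliper) diameter and width from the minimum bounding rectangle. A rough OpenCV equivalent of that measurement step is sketched below; the file name, threshold method and area cut-off are assumptions for illustration, not the ImageJ plugin itself.

        import cv2
        import numpy as np
        from itertools import combinations

        # Hypothetical scanned image of particles on a light background.
        img = cv2.imread("particles_scan.png", cv2.IMREAD_GRAYSCALE)
        _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        lengths, widths = [], []
        for c in contours:
            if cv2.contourArea(c) < 20:             # ignore specks
                continue
            hull = cv2.convexHull(c).reshape(-1, 2)
            # Feret (maximum caliper) diameter: largest pairwise distance on the hull.
            feret = max(np.linalg.norm(p - q) for p, q in combinations(hull, 2))
            (_, _), (w, h), _ = cv2.minAreaRect(c)  # minimum bounding rectangle
            lengths.append(feret)
            widths.append(min(w, h))
        print("particles:", len(lengths), " mean length (px):", np.mean(lengths))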

  14. Reinforcement Learning: A Tutorial.

    Science.gov (United States)

    1997-01-01

    The purpose of this tutorial is to provide an introduction to reinforcement learning (RL) at a level easily understood by students and researchers in...provides a simple example to develop intuition of the underlying dynamic programming mechanism. In Section (2) the parts of a reinforcement learning problem... reinforcement learning algorithms. These include TD(lambda) and both the residual and direct forms of value iteration, Q-learning, and advantage learning
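
    Since the tutorial centres on algorithms such as Q-learning and TD(lambda), a minimal tabular Q-learning example may help fix the idea; the corridor environment and hyper-parameters below are invented for illustration only.

        import numpy as np

        # Tabular Q-learning on a tiny 1-D corridor: states 0..4, reward 1 at state 4.
        n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.9, 0.1
        rng = np.random.default_rng(0)

        for _ in range(2000):               # episodes
            s = 0
            while s != n_states - 1:
                a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
                s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
                r = 1.0 if s2 == n_states - 1 else 0.0
                # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
                Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
                s = s2
        print(np.round(Q, 2))               # the greedy policy should prefer action 1 (right)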

  15. Review on Computational Model for Vision (视觉认知计算模型综述)

    Institute of Scientific and Technical Information of China (English)

    黄凯奇; 谭铁牛

    2013-01-01

    Computational models for vision are an effective bridge between visual cognition and information computing; their study involves several intersecting disciplines, such as cognitive science and information science, and is characterized by complexity and diversity. To better grasp how the field is developing, this paper systematically surveys computational models for vision from the perspective of visual computing, reviewing their development along their two main sources: biological visual mechanisms and computational vision theory. Based on the characteristics of this research, some comments on the development of these models are offered, and it is argued that their further development will have a far-reaching impact on computational vision theory and on the understanding of biological visual mechanisms.

  16. Interactive learning tutorials on quantum mechanics

    CERN Document Server

    Singh, Chandralekha

    2016-01-01

    We discuss the development and evaluation of quantum interactive learning tutorials (QuILTs), which are suitable for undergraduate courses in quantum mechanics. QuILTs are based on the investigation of student difficulties in learning quantum physics. They exploit computer-based visualization tools and help students build links between the formal and conceptual aspects of quantum physics without compromising the technical content. They can be used either as supplements to lectures or as a self-study tool.

  17. Behavioral response of tilapia (Oreochromis niloticus) to acute ammonia stress monitored by computer vision

    Institute of Scientific and Technical Information of China (English)

    XU Jian-yu; MIAO Xiang-wen; LIU Ying; CUI Shao-rong

    2005-01-01

    The behavioral responses of a tilapia (Oreochromis niloticus) school to low (0.13 mg/L), moderate (0.79 mg/L) and high (2.65 mg/L) levels of un-ionized ammonia (UIA) concentration were monitored using a computer vision system. The swimming activity and geometrical parameters such as the location of the gravity center and the distribution of the fish school were calculated continuously. These behavioral parameters of the tilapia school responded sensitively to moderate and high UIA concentrations. Under high UIA concentration the fish activity showed a significant increase (P<0.05), exhibiting an avoidance reaction to the high-ammonia condition, and then decreased gradually. Under moderate and high UIA concentrations the school's vertical location fluctuated significantly (P<0.05), with the school moving up to the water surface and then down to the bottom of the aquarium alternately and tending to crowd together. After several hours' exposure to the high UIA level, the school finally stayed at the aquarium bottom. These observations indicate that alterations in fish behavior under acute stress can provide important information useful in predicting the stress.
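
    The monitoring described above boils down to segmenting the fish from each frame and tracking the school's gravity centre over time. The sketch below shows one plausible way to do that with OpenCV background subtraction; the video file name and subtractor parameters are assumptions, not the authors' setup.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("tilapia_school.avi")          # hypothetical recording
        bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

        centers = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)                             # foreground = moving fish
            mask = cv2.medianBlur(mask, 5)
            m = cv2.moments(mask, binaryImage=True)
            if m["m00"] > 0:
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                centers.append((cx, cy))                       # gravity centre of the school

        centers = np.array(centers)
        if len(centers) > 1:
            # Swimming-activity proxy: mean frame-to-frame displacement of the centre.
            activity = np.mean(np.linalg.norm(np.diff(centers, axis=0), axis=1))
            print("mean vertical position:", centers[:, 1].mean(), " activity:", activity)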

  18. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to generate test cases that expose such errors. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
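
    The robustness experiment refers to OpenCV's stock pedestrian detector. A small sketch of that kind of perturbation test, running the default HOG + linear-SVM people detector on increasingly noisy copies of a frame, is given below; the image file and noise levels are assumptions, not the paper's test generator.

        import cv2
        import numpy as np

        # OpenCV's default HOG + linear-SVM pedestrian detector.
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        frame = cv2.imread("street_frame.png")        # hypothetical video frame
        rng = np.random.default_rng(0)

        for sigma in (0.0, 1.0, 2.0, 4.0):
            # Add small Gaussian perturbations that are barely visible to the eye.
            noisy = np.clip(frame.astype(np.float32) + rng.normal(0, sigma, frame.shape),
                            0, 255).astype(np.uint8)
            rects, weights = hog.detectMultiScale(noisy, winStride=(8, 8))
            print(f"sigma={sigma}: {len(rects)} detections")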

  19. Implementation of Computer Vision Based Industrial Fire Safety Automation by Using Neuro-Fuzzy Algorithms

    Directory of Open Access Journals (Sweden)

    Manjunatha K.C.

    2015-03-01

    Full Text Available A computer vision-based automated fire detection and suppression system for manufacturing industries is presented in this paper. An automated fire suppression system plays a very significant role in an Onsite Emergency System (OES), as it can prevent accidents and losses to the industry. A rule-based generic collective model for fire pixel classification is proposed for a single camera with multiple fire-suppression chemical control valves. A Neuro-Fuzzy algorithm is used to identify the exact location of fire pixels in the image frame. Fuzzy logic is then used to identify the valve to be controlled, based on the area of the fire and the intensity values of the fire pixels. The fuzzy output is given to a supervisory control and data acquisition (SCADA) system to generate suitable analog values for control valve operation based on the fire characteristics. Results for both the fire identification and the suppression system are presented. The proposed method achieves up to 99% accuracy in fire detection and automated suppression.
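
    The record does not give its fire-pixel rules, but rule-based colour filters of this kind typically combine a dominant red channel with a saturation check. Below is a generic sketch of such a filter (not the authors' model); the thresholds and the way area and intensity are summarised for the fuzzy stage are assumptions.

        import cv2
        import numpy as np

        def fire_pixel_mask(bgr, r_thresh=190, s_thresh=60):
            # Red channel dominant and hot: R > threshold and R >= G > B.
            b = bgr[:, :, 0].astype(np.int32)
            g = bgr[:, :, 1].astype(np.int32)
            r = bgr[:, :, 2].astype(np.int32)
            rule = (r > r_thresh) & (r >= g) & (g > b)
            # Reject washed-out (low-saturation) regions such as lamps or white walls.
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            rule &= hsv[:, :, 1] > s_thresh
            return rule.astype(np.uint8) * 255

        frame = cv2.imread("plant_camera.png")         # hypothetical industrial camera frame
        mask = fire_pixel_mask(frame)
        area = int(np.count_nonzero(mask))
        intensity = float(frame[:, :, 2][mask > 0].mean()) if area else 0.0
        # 'area' and 'intensity' would feed the fuzzy stage that selects a control valve.
        print("fire area (px):", area, " mean R intensity:", intensity)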

  20. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Science.gov (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues is examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches to crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach to CNN detector initialization, and in the use of a modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. Promisingly, the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
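
    The stereo-matching step relies on a census transform, whose plain (unmodified) form is short enough to sketch: each pixel is encoded by comparing it with its neighbourhood, and matching costs become Hamming distances between codes. The window size and the synthetic shifted-image check below are illustrative assumptions, not the paper's modified variant.

        import numpy as np

        def census_transform(img, w=2):
            """5x5 census transform: encode each pixel by comparing it with its neighbours."""
            img = img.astype(np.int32)
            h, wd = img.shape
            out = np.zeros((h, wd), dtype=np.uint32)
            centre = img[w:h - w, w:wd - w]
            bit = 0
            for dy in range(-w, w + 1):
                for dx in range(-w, w + 1):
                    if dy == 0 and dx == 0:
                        continue
                    shifted = img[w + dy:h - w + dy, w + dx:wd - w + dx]
                    bitplane = (shifted < centre).astype(np.uint32) << np.uint32(bit)
                    out[w:h - w, w:wd - w] |= bitplane
                    bit += 1
            return out

        def hamming(a, b):
            # Per-pixel Hamming distance between two uint32 census codes.
            bytes_ = (a ^ b).view(np.uint8).reshape(a.shape + (4,))
            return np.unpackbits(bytes_, axis=-1).sum(-1)

        left = (np.random.rand(64, 64) * 255).astype(np.uint8)
        right = np.roll(left, 3, axis=1)               # fake disparity of 3 pixels
        cl, cr = census_transform(left), census_transform(right)
        cost = hamming(cl[:, 8:-8], np.roll(cr, -3, axis=1)[:, 8:-8]).mean()
        print("mean Hamming cost at the true shift:", cost)   # should be close to zero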

  1. Information theory analysis of sensor-array imaging systems for computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade-off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor-array instead of square lattice to decrease sensitivity to edge orientation also improves the signal information density up to about 30 percent at high SNRs.

  2. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.

    Science.gov (United States)

    Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2013-04-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
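
    The detector described above combines edges and corners with general geometric shape. A much-simplified sketch of that idea, approximating large contours by quadrilaterals and keeping tall ones as door candidates, is given below; the thresholds, aspect-ratio range and the mention of pytesseract for the OCR stage are assumptions, not the authors' parameters.

        import cv2
        import numpy as np

        # Hypothetical corridor image; doors appear as large, roughly rectangular contours.
        img = cv2.imread("corridor.png")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        doors = []
        for c in contours:
            if cv2.contourArea(c) < 0.02 * img.shape[0] * img.shape[1]:
                continue                                    # too small to be a door
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 4:                            # four corners -> door-like frame
                x, y, w, h = cv2.boundingRect(approx)
                if 1.5 < h / float(w) < 4.0:                # doors are taller than wide
                    doors.append((x, y, w, h))
        print(len(doors), "door-like candidates found")
        # A sign region near each candidate could then be passed to off-the-shelf OCR
        # (e.g. pytesseract.image_to_string) to tell an office door from a bathroom door.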

  3. Computer vision on color-band resistor and its cost-effective diffuse light source design

    Science.gov (United States)

    Chen, Yung-Sheng; Wang, Jeng-Yau

    2016-11-01

    Color-band resistors, which possess a specular surface, are worth studying in the area of color image processing and color material recognition. The specular reflection and halo effects appearing in the acquired resistor image make color-band extraction and recognition difficult. A computer vision system is proposed to detect the resistor orientation, segment the resistor's main body, extract and identify the color bands, recognize the color code sequence, and read the resistor value. The effectiveness of reducing the specular reflection and halo effects is confirmed using several cheap covers, e.g., a paper bowl, cup, or box lined inside with white paper, combined with a ring-type LED controlled automatically by the detected resistor orientation. The calibration of the microscope used to acquire the resistor image is described and a proper environmental light intensity is suggested. Experiments on 200 4-band and 200 5-band resistors, covering the 12 colors used on color-band resistors, show a correct resistor-reading rate above 90%. The performance, reported through the number of failures in horizontal alignment, color-band extraction, color identification, and color-code-sequence flip-over checking, confirms the feasibility of the presented approach.
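
    Once the band colours are identified, reading the resistor value is a table lookup. The sketch below shows that final decoding step for 4-band and 5-band resistors; it is the generic colour-code table, not code from the paper.

        # Colour-code lookup; the band sequence would come from the colour-identification stage.
        DIGIT = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
                 "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9}
        MULT = {**{k: 10 ** v for k, v in DIGIT.items()}, "gold": 0.1, "silver": 0.01}
        TOL = {"brown": 1, "red": 2, "green": 0.5, "blue": 0.25, "violet": 0.1,
               "gray": 0.05, "gold": 5, "silver": 10}

        def resistor_value(bands):
            """bands: colour names ordered from the first band to the tolerance band."""
            digits = bands[:-2]               # 2 digits (4-band) or 3 digits (5-band)
            value = int("".join(str(DIGIT[b]) for b in digits)) * MULT[bands[-2]]
            return value, TOL[bands[-1]]      # ohms, tolerance in percent

        print(resistor_value(["yellow", "violet", "red", "gold"]))          # 4.7 kΩ ±5%
        print(resistor_value(["brown", "black", "black", "red", "brown"]))  # 10 kΩ ±1%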

  4. Computer-based and web-based applications for night vision goggle training

    Science.gov (United States)

    Ruffner, John W.; Woodward, Kim G.

    2001-08-01

    Night vision goggles (NVGs) can enhance military and civilian operations at night. With this increased capability comes the requirement to provide suitable training. Results from field experience and accident analyses suggest that problems experienced by NVG users can be attributed to a limited understanding of NVG limitations and to perceptual problems. In addition, there is evidence that NVG skills are perishable and require frequent practice. Formal training is available to help users obtain the required knowledge and skills. However, there often is insufficient opportunity to obtain and practice perceptual skills prior to using NVGs in the operational environment. NVG users need early and continued exposure to the night environment across a broad range of visual and operational conditions to develop and maintain the necessary knowledge and perceptual skills. NVG training has consisted of classroom instruction, hands-on training, and simulator training. Advances in computer-based training (CBT) and web-based training (WBT) have made these technologies very appealing as additions to the NVG training mix. This paper discusses our efforts to develop NVG training using multimedia, interactive CBT and WBT.

  5. When Ultrasonic Sensors and Computer Vision Join Forces for Efficient Obstacle Detection and Recognition.

    Science.gov (United States)

    Mocanu, Bogdan; Tapu, Ruxandra; Zaharia, Titus

    2016-10-28

    In the most recent report published by the World Health Organization concerning people with visual disabilities it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects present in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology we have performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing mobility, while being friendly and easy to learn.

  6. When Ultrasonic Sensors and Computer Vision Join Forces for Efficient Obstacle Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Bogdan Mocanu

    2016-10-01

    Full Text Available In the most recent report published by the World Health Organization concerning people with visual disabilities it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects present in a scene, regardless of their location, size or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology we have performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing mobility, while being friendly and easy to learn.

  7. Optimisation and assessment of three modern touch screen tablet computers for clinical vision testing.

    Science.gov (United States)

    Tahir, Humza J; Murray, Ian J; Parry, Neil R A; Aslam, Tariq M

    2014-01-01

    Technological advances have led to the development of powerful yet portable tablet computers whose touch-screen resolutions now permit the presentation of targets small enough to test the limits of normal visual acuity. Such devices have become ubiquitous in daily life and are moving into the clinical space. However, in order to produce clinically valid tests, it is important to identify the limits imposed by the screen characteristics, such as resolution, brightness uniformity, contrast linearity and the effect of viewing angle. Previously we have conducted such tests on the iPad 3. Here we extend our investigations to 2 other devices and outline a protocol for calibrating such screens, using standardised methods to measure the gamma function, warm up time, screen uniformity and the effects of viewing angle and screen reflections. We demonstrate that all three devices manifest typical gamma functions for voltage and luminance with warm up times of approximately 15 minutes. However, there were differences in homogeneity and reflectance among the displays. We suggest practical means to optimise quality of display for vision testing including screen calibration.

  8. Optimisation and assessment of three modern touch screen tablet computers for clinical vision testing.

    Directory of Open Access Journals (Sweden)

    Humza J Tahir

    Full Text Available Technological advances have led to the development of powerful yet portable tablet computers whose touch-screen resolutions now permit the presentation of targets small enough to test the limits of normal visual acuity. Such devices have become ubiquitous in daily life and are moving into the clinical space. However, in order to produce clinically valid tests, it is important to identify the limits imposed by the screen characteristics, such as resolution, brightness uniformity, contrast linearity and the effect of viewing angle. Previously we have conducted such tests on the iPad 3. Here we extend our investigations to 2 other devices and outline a protocol for calibrating such screens, using standardised methods to measure the gamma function, warm up time, screen uniformity and the effects of viewing angle and screen reflections. We demonstrate that all three devices manifest typical gamma functions for voltage and luminance with warm up times of approximately 15 minutes. However, there were differences in homogeneity and reflectance among the displays. We suggest practical means to optimise quality of display for vision testing including screen calibration.
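
    Both records above describe measuring each display's gamma function as part of calibration. A minimal numpy sketch of fitting gamma from a few luminance measurements and inverting the mapping is shown below; the luminance values are made up, not the study's data.

        import numpy as np

        # Fit L = L_max * (V / V_max)^gamma from measured luminance at a few grey levels.
        grey = np.array([32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
        lum = np.array([1.2, 5.1, 12.0, 22.5, 37.0, 55.8, 79.0, 104.0])   # cd/m^2 (made up)

        x = np.log(grey / 255.0)
        y = np.log(lum / lum[-1])
        gamma, _ = np.polyfit(x, y, 1)          # slope of the log-log line is gamma
        print(f"estimated display gamma ≈ {gamma:.2f}")

        # Inverse mapping: the grey level needed to display a target relative luminance.
        def grey_for_luminance(rel_lum, gamma=gamma):
            return int(round(255 * rel_lum ** (1.0 / gamma)))
        print("50% luminance needs grey level", grey_for_luminance(0.5))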

  10. UAV and Computer Vision in 3D Modeling of Cultural Heritage in Southern Italy

    Science.gov (United States)

    Barrile, Vincenzo; Gelsomino, Vincenzo; Bilotta, Giuliana

    2017-08-01

    On the waterfront Italo Falcomatà of Reggio Calabria one can admire the most extensive surviving stretch of the Hellenistic-period walls of the ancient city of Rhegion. The so-called Greek Walls are one of the most significant and visible traces of the past linked to the culture of Ancient Greece in the territory of Reggio Calabria. Over the years this stretch of wall has always been part of the city's outer fortifications, up to the reconstruction of Reggio after the earthquake of 1783, and has been restored countless times to cope with the degradation of time and with increasingly innovative and sophisticated siege techniques. The walls have been the subject of several studies on their history, their construction techniques, and their maintenance and restoration. This note describes the methodology used to build a three-dimensional model of the Greek Walls, carried out by the Geomatics Laboratory of the DICEAM Department of the University “Mediterranea” of Reggio Calabria. The 3D modelling is based on imaging techniques, namely digital photogrammetry and computer vision, using a drone. The acquired digital images were then processed using the commercial software Agisoft PhotoScan. The results demonstrate the value of the technique in the field of cultural heritage, as an attractive alternative to more expensive and demanding techniques such as laser scanning.

  11. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    Science.gov (United States)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed is a problem. In this study, a new computer-vision-based order tracking method is proposed to address this problem. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the frequency-domain signal is transferred to an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
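
    The diagnosis chain described above (estimate the instantaneous rotating speed, resample the audio into the angular domain, then compute an envelope order spectrum) can be illustrated on synthetic data. In the sketch below the speed profile, fault order and signal model are invented stand-ins; in the actual method the speed curve comes from SURF matching of the high-speed video rather than being assumed.

        import numpy as np
        from scipy.signal import hilbert

        fs, T = 20000, 4.0
        t = np.arange(0, T, 1 / fs)
        irs = 20 + 5 * t                           # assumed instantaneous speed, rev/s
        phase = 2 * np.pi * np.cumsum(irs) / fs    # shaft angle in radians
        fault_order = 3.6                          # hypothetical ball-pass order
        # Amplitude-modulated resonance standing in for the recorded sound.
        signal = (1 + 0.8 * np.cos(fault_order * phase)) * np.cos(2 * np.pi * 3000 * t)
        signal += 0.2 * np.random.randn(len(t))

        env = np.abs(hilbert(signal))              # 1) envelope of the signal
        angle = phase / (2 * np.pi)                # shaft revolutions
        samples_per_rev = 64
        uniform_angle = np.arange(0, angle[-1], 1 / samples_per_rev)
        env_ang = np.interp(uniform_angle, angle, env)   # 2) equi-angle resampling
        spec = np.abs(np.fft.rfft(env_ang - env_ang.mean()))  # 3) envelope order spectrum
        orders = np.fft.rfftfreq(len(env_ang), d=1 / samples_per_rev)
        print("dominant order:", orders[np.argmax(spec)])     # should be near 3.6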

  12. Interactive and Audience Adaptive Digital Signage Using Real-Time Computer Vision

    Directory of Open Access Journals (Sweden)

    Robert Ravnik

    2013-02-01

    Full Text Available In this paper we present the development of an interactive, content-aware and cost-effective digital signage system. Using a monocular camera installed within the frame of a digital signage display, we employ real-time computer vision algorithms to extract temporal, spatial and demographic features of the observers, which are further used for observer-specific broadcasting of digital signage content. The number of observers is obtained by the Viola and Jones face detection algorithm, whilst facial images are registered using multi-view Active Appearance Models. The distance of the observers from the system is estimated from the interpupillary distance of registered faces. Demographic features, including gender and age group, are determined using SVM classifiers to achieve individual observer-specific selection and adaptation of the digital signage broadcasting content. The developed system was evaluated at the laboratory study level and in a field study performed for audience measurement research. Comparison of our monocular localization module with the Kinect stereo-system reveals a comparable level of accuracy. The facial characterization module is evaluated on the FERET database with 95% accuracy for gender classification and 92% for age group. Finally, the field study demonstrates the applicability of the developed system in real-life environments.
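
    The distance estimate above follows from the pinhole model: observer distance ≈ focal length (in pixels) × real interpupillary distance / interpupillary distance in pixels. The sketch below illustrates that with OpenCV's bundled Haar cascades; the focal length, the 63 mm average IPD and the image file are assumptions, and the original system uses Active Appearance Models rather than eye cascades.

        import cv2

        face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

        frame = cv2.imread("signage_camera.png")      # hypothetical camera frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        FOCAL_PX = 900.0       # camera focal length in pixels (assumed, from calibration)
        IPD_MM = 63.0          # average adult interpupillary distance

        for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.1, 5):
            eyes = eye_cc.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
            if len(eyes) >= 2:
                centers = sorted((ex + ew / 2.0, ey + eh / 2.0) for ex, ey, ew, eh in eyes)[:2]
                ipd_px = abs(centers[1][0] - centers[0][0])
                if ipd_px > 0:
                    # Pinhole model: distance = f * real_size / pixel_size.
                    dist_mm = FOCAL_PX * IPD_MM / ipd_px
                    print(f"observer at roughly {dist_mm / 1000:.2f} m from the display")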

  13. Real-Time Evaluation of Breast Self-Examination Using Computer Vision

    Directory of Open Access Journals (Sweden)

    Eman Mohammadi

    2014-01-01

    Full Text Available Breast cancer is the most common cancer among women worldwide, and breast self-examination (BSE) is considered the most cost-effective approach for early breast cancer detection. The general objective of this paper is to design and develop a computer vision algorithm to evaluate BSE performance in real time. The first stage of the algorithm presents a method for detecting and tracking the nipples in frames while a woman performs BSE; the second stage presents a method for localizing the breast region and the blocks of pixels related to palpation of the breast, and the third stage focuses on detecting the palpated blocks in the breast region. The palpated blocks are highlighted at the time of BSE performance. In a correct BSE performance, all blocks must be palpated, checked, and highlighted, respectively. If any abnormality, such as a mass, is detected, it must be reported to a doctor to confirm its presence and to proceed with other confirmatory tests. The experimental results have shown that the BSE evaluation algorithm presented in this paper provides robust performance.

  14. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  15. Computer vision-guided robotic system for electrical power lines maintenance

    Science.gov (United States)

    Tremblay, Jack; Laliberte, T.; Houde, Regis; Pelletier, Michel; Gosselin, Clement M.; Laurendeau, Denis

    1995-12-01

    The paper presents several modules of a computer vision-assisted robotic system for the maintenance of live electrical power lines. The basic scene of interest is composed of generic components such as a crossarm, a power line and a porcelain insulator. The system is under the supervision of an operator who validates each subtask. The system uses a 3D range finder mounted on the end-effector of a 6-DOF manipulator for the acquisition of range data on the scene. Since more than one view is required to obtain enough information on the scene, a view integration procedure is applied to the data in order to merge the information into a single reference frame. A volumetric description of the scene, in this case an octree, is built using the range data. The octree is transformed into an occupancy grid which is used for avoiding collisions between the manipulator and the components of the scene during the line manipulation step. The collision avoidance module uses the occupancy grid to create a discrete electrostatic potential field representing the various goals (e.g. objects of interest) and obstacles in the scene. The algorithm takes into account the articular limits of the robot and uses a redundant manipulator to ensure that the collision avoidance constraints do not compete with the task, which is to reach a given goal with the end-effector. A pose determination algorithm called Iterative Closest Point is presented. The algorithm makes it possible to compute the pose of the various components of the scene and allows the robot to manipulate these components safely. The system has been tested on an actual scene. The manipulation was successfully implemented using a synchronized geometry range finder mounted on a PUMA 760 robot manipulator under the control of Cartool.
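
    The pose-determination step uses the Iterative Closest Point algorithm, which is compact enough to sketch in full: repeatedly match each point to its nearest neighbour and solve for the best rigid transform by SVD. The toy point clouds and iteration count below are illustrative; this is a generic point-to-point ICP, not the paper's implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, iters=30):
            """Minimal point-to-point ICP: aligns src (Nx3) to dst (Mx3), returns R, t."""
            R, t = np.eye(3), np.zeros(3)
            cur = src.copy()
            tree = cKDTree(dst)
            for _ in range(iters):
                _, idx = tree.query(cur)                  # closest-point correspondences
                matched = dst[idx]
                # Best rigid transform for these correspondences (Kabsch / SVD).
                mu_s, mu_d = cur.mean(0), matched.mean(0)
                H = (cur - mu_s).T @ (matched - mu_d)
                U, _, Vt = np.linalg.svd(H)
                D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
                R_i = Vt.T @ D @ U.T
                t_i = mu_d - R_i @ mu_s
                cur = cur @ R_i.T + t_i
                R, t = R_i @ R, R_i @ t + t_i             # accumulate the transform
            return R, t

        # Toy check: recover a known rotation and translation of a random cloud.
        rng = np.random.default_rng(0)
        model = rng.random((500, 3))
        ang = np.deg2rad(10)
        R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                           [np.sin(ang),  np.cos(ang), 0],
                           [0, 0, 1]])
        scene = model @ R_true.T + np.array([0.1, -0.05, 0.2])
        R_est, t_est = icp(model, scene)
        print("rotation error:", np.linalg.norm(R_est - R_true))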

  16. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Directory of Open Access Journals (Sweden)

    Shanis Barnard

    Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is

  17. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    Science.gov (United States)

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  18. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    Science.gov (United States)

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  19. FIN 320 UOP Course Tutorial/TutorialRank

    OpenAIRE

    apj

    2015-01-01

    For more course tutorials visit www.tutorialrank.com. What are the differences between accounting and finance? What are the roles of financial managers? What are their fiduciary responsibilities? By what ethical standards should they abide?

  20. FIN 320(UOP) UOP Course Tutorial/TutorialRank

    OpenAIRE

    apj

    2015-01-01

    For more course tutorials visit www.tutorialrank.com. What are the differences between accounting and finance? What are the roles of financial managers? What are their fiduciary responsibilities? By what ethical standards should they abide?

  1. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency.
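
    The control flow described above (group depth cues into surfaces, hold them in short-term memory, and let top-down attention select one surface at a time for detailed recognition) can be summarised in a few lines. The sketch below is illustrative only and not the authors' code; the segmentation, scoring and recognition functions are hypothetical placeholders supplied by the caller.

```python
# Minimal sketch of surface-based attention for visual search (illustrative,
# not the authors' implementation). segment_surfaces, top_down_score and
# recognize are hypothetical callables provided by the caller.
def visual_search(depth_image, rgb_image, target_model,
                  segment_surfaces, top_down_score, recognize):
    # Perceptual grouping of depth cues into surface hypotheses
    surfaces = segment_surfaces(depth_image)              # list of pixel masks
    # Short-term memory: surfaces ranked by top-down relevance to the target
    ranked = sorted(surfaces,
                    key=lambda s: top_down_score(s, target_model),
                    reverse=True)
    for surface in ranked:                                # serial attention
        if recognize(rgb_image, surface, target_model):   # detailed inspection
            return surface                                # target found
    return None                                           # target not present
```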

  2. Image Processing, Computer Vision, and Deep Learning: new approaches to the analysis and physics interpretation of LHC events

    Science.gov (United States)

    Schwartzman, A.; Kagan, M.; Mackey, L.; Nachman, B.; De Oliveira, L.

    2016-10-01

    This review introduces recent developments in the application of image processing, computer vision, and deep neural networks to the analysis and interpretation of particle collision events at the Large Hadron Collider (LHC). The link between LHC data analysis and computer vision techniques relies on the concept of jet-images, building on the notion of a particle physics detector as a digital camera and the particles it measures as images. We show that state-of-the-art image classification techniques based on deep neural network architectures significantly improve the identification of highly boosted electroweak particles with respect to existing methods. Furthermore, we introduce new methods to visualize and interpret the high level features learned by deep neural networks that provide discrimination beyond physics- derived variables, adding a new capability to understand physics and to design more powerful classification methods at the LHC.
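
    As a concrete illustration of the jet-image idea (treating the detector as a digital camera), the following sketch bins a list of particle coordinates and transverse momenta into a fixed-size pixel grid with NumPy. It is a simplified, assumed preprocessing step rather than the authors' analysis code; a real pipeline would add translation, rotation and other standardisation of the image before classification.

```python
import numpy as np

def jet_image(eta, phi, pt, npix=25, half_width=1.25):
    """Bin particle transverse momenta into a square (eta, phi) grid,
    forming a 'jet-image'. eta and phi are given relative to the jet axis;
    pt acts as the pixel intensity."""
    edges = np.linspace(-half_width, half_width, npix + 1)
    image, _, _ = np.histogram2d(eta, phi, bins=(edges, edges), weights=pt)
    # Typical preprocessing step: normalize total intensity to one
    total = image.sum()
    return image / total if total > 0 else image

# Example with three toy particles (values are illustrative only)
img = jet_image(np.array([0.0, 0.1, -0.2]),
                np.array([0.0, -0.3, 0.2]),
                np.array([150.0, 30.0, 12.0]))
```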

  3. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of important issues in the production and processing of food product. Traditionally, volume measurement can be performed using water displacement method based on Archimedes’ principle. Water displacement method is inaccurate and considered as destructive method. Computer vision offers an accurate and nondestructive method in measuring volume of food product. This paper proposes algorithm for volume measurement of irregular shape food product using computer vision based on Monte Carlo method. Five images of object were acquired from five different views and then processed to obtain the silhouettes of object. From the silhouettes of object, Monte Carlo method was performed to approximate the volume of object. The simulation result shows that the algorithm produced high accuracy and precision for volume measurement.
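
    The Monte Carlo step of such an algorithm can be sketched as follows: random points are drawn inside a bounding box, a point counts as belonging to the object only if its projection falls inside the silhouette in every view, and the volume is the box volume times the accepted fraction. The camera projection function and the silhouette images are assumed inputs; this is an illustrative sketch, not the published implementation.

```python
import numpy as np

def monte_carlo_volume(silhouettes, project, bounds, n_samples=200_000,
                       rng=np.random.default_rng(0)):
    """Approximate object volume from multi-view silhouettes.
    silhouettes: list of boolean images (True = object pixel), one per view.
    project(points, view_index) -> integer (u, v) pixel coordinates; a
    hypothetical calibrated camera model supplied by the caller.
    bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) bounding box."""
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    pts = rng.uniform(lows, highs, size=(n_samples, 3))
    inside = np.ones(n_samples, dtype=bool)
    for k, sil in enumerate(silhouettes):
        u, v = project(pts, k)                        # pixel coords per point
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(n_samples, dtype=bool)
        hit[ok] = sil[v[ok], u[ok]]
        inside &= hit                                 # must lie in every view
    box_volume = np.prod(highs - lows)
    return box_volume * inside.mean()
```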

  4. A clinical study on "Computer vision syndrome" and its management with Triphala eye drops and Saptamrita Lauha.

    Science.gov (United States)

    Gangamma, M P; Poonam; Rajagopala, Manjusha

    2010-04-01

    The American Optometric Association (AOA) defines computer vision syndrome (CVS) as a "complex of eye and vision problems related to near work, which are experienced during or related to computer use". Most studies indicate that Video Display Terminal (VDT) operators report more eye-related problems than non-VDT office workers. The causes of the inefficiencies and the visual symptoms are a combination of individual visual problems and poor office ergonomics. In this clinical study on CVS, 151 patients were registered, out of whom 141 completed the treatment. In Group A, 45 patients had been prescribed Triphala eye drops; in Group B, 53 patients had been prescribed the Triphala eye drops and Saptamrita Lauha tablets internally; and in Group C, 43 patients had been prescribed placebo eye drops and placebo tablets. In total, marked improvement was observed in 48.89%, 54.71% and 6.98% of patients in groups A, B and C, respectively.

  5. A Survey of Camera Calibration in Computer Vision

    Institute of Scientific and Technical Information of China (English)

    马伟

    2013-01-01

    By analysing the principles of computer vision, this paper presents the camera calibration methods used in computer vision and discusses their applications.
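
    As a concrete example of one calibration method that such surveys typically cover, the snippet below runs the classical chessboard (Zhang-style) calibration with OpenCV. The image folder and pattern size are assumptions for illustration.

```python
import glob

import cv2
import numpy as np

# Classical chessboard calibration, a standard method covered by surveys of
# camera calibration. "calib_images" is a hypothetical folder of chessboard
# photographs taken from different viewpoints.
pattern = (9, 6)                                   # inner corners per row/col
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix, distortion coefficients and per-view extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("re-projection error:", rms)
```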

  6. Study on a New Technique of On-line Monitoring of Oil Contamination Level Using Computer Vision Technology

    Institute of Scientific and Technical Information of China (English)

    TU Qun-zhang; ZUO Hong-fu

    2004-01-01

    In this paper, a new technique of capturing the images of debris in lubrication or hydraulic oil using micro-imaging and computer vision techniques is introduced. By way of image processing, the size and distribution of debris are obtained, and then the oil contamination level is also obtained. Because the information on oil contamination is obtained directly from the images of debris by this method, the monitoring result is more intuitive and reliable.
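
    A minimal sketch of the image-processing step described above (not the authors' system): the debris particles in a micro-image are segmented by thresholding, sized by connected-component analysis, and binned into a size distribution from which a contamination level could be graded. The file name, optical scale and bin edges are assumptions.

```python
import cv2
import numpy as np

# "oil_sample.png" is a placeholder micro-image of an oil sample.
img = cv2.imread("oil_sample.png", cv2.IMREAD_GRAYSCALE)
# Debris appears darker than the carrier oil; Otsu picks the threshold.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
areas_px = stats[1:, cv2.CC_STAT_AREA]             # skip the background label
um_per_px = 2.0                                    # assumed optical scale
diameters = 2.0 * np.sqrt(areas_px / np.pi) * um_per_px
# Size distribution; bin edges are chosen for illustration only.
hist, edges = np.histogram(diameters, bins=[4, 6, 14, 21, 38, 70, 1000])
print(dict(zip(["4-6", "6-14", "14-21", "21-38", "38-70", ">70"], hist)))
```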

  7. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    Science.gov (United States)

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

    With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments for food quality and safety during food production and processing. Computer vision, comprising a nondestructive assessment approach, has the aptitude to estimate the characteristics of food products with its advantages of fast speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review aims to present the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discuss its future trends in combination with spectroscopy.

  8. A Review of the Application of Computer Vision to the Inspection and Assessment of Textiles Apparent Properties

    Institute of Scientific and Technical Information of China (English)

    步红刚; 李立轻; 黄秀宝

    2004-01-01

    Due to its advantages of objectivity, automation, accuracy and speed in various applications, computer vision has become one of the research hotspots in the objective inspection and assessment of the apparent properties of textiles over the past two decades. This paper provides a brief review of its applications over the recent decade, both at home and abroad, to the automatic inspection and assessment of the various apparent properties of textiles, such as yarns, woven and knitted fabrics, carpets, nonwovens and textile webs, together with a detailed introduction to the work conducted by our research section (the Computer Vision Textiles Application Research Section, College of Textiles, Dong Hua University), including the objective evaluation of fabric wrinkle grade, automatic fabric defect detection and the assessment of fabric pilling grade. Experimental results have confirmed the feasibility of our approaches for the objective inspection and assessment of the apparent properties of fabrics, and indicate that computer vision is a powerful tool for their objective and automatic inspection and assessment, with a bright application future.

  9. UpToDate: tutorial. December 2010

    OpenAIRE

    Universitat de Barcelona. CRAI

    2010-01-01

    Tutorial on searching the UpToDate database, an evidence-based clinical medicine resource. UpToDate provides synthesized access to medical information. It contains original reviews, written by recognized experts, that analyse specific clinical cases and provide the relevant recommendations. It is focused on answering questions about patient care and supporting decision-making in daily clinical practice.

  10. "Accelerators and beams," a multimedia tutorial

    Science.gov (United States)

    Silbar, Richard R.

    1997-02-01

    We are developing a computer-based tutorial for charged-particle beam optics under a grant from the DOE. This subject is important to the DOE not only for its use in providing basic research tools but because the physics is the underpinning for accelerators used in industry and medicine. The tutorial, which will be delivered on Macintosh and Windows platforms, uses multimedia techniques to enhance the student's rate of learning and length of retention of the material. As such, it integrates our interactive On-Screen Laboratories™ with hypertext, line drawings, photographs, animation, video, and sound. We are targeting an audience from technicians to graduate students in science and engineering. At this time we have about a fourth of the material (about equivalent to a one-semester three-credit-hour upper under-graduate physics course) available in prototype form.

  11. Tracking the Creation of Tropical Forest Canopy Gaps with UAV Computer Vision Remote Sensing

    Science.gov (United States)

    Dandois, J. P.

    2015-12-01

    The formation of canopy gaps is fundamental for shaping forest structure and is an important component of ecosystem function. Recent time-series of airborne LIDAR have shown great promise for improving understanding of the spatial distribution and size of forest gaps. However, such work typically looks at gap formation across multiple years and important intra-annual variation in gap dynamics remains unknown. Here we present findings on the intra-annual dynamics of canopy gap formation within the 50 ha forest dynamics plot of Barro Colorado Island (BCI), Panama based on unmanned aerial vehicle (UAV) remote sensing. High-resolution imagery (7 cm GSD) over the 50 ha plot was obtained regularly (≈ every 10 days) beginning October 2014 using a UAV equipped with a point and shoot camera. Imagery was processed into three-dimensional (3D) digital surface models (DSMs) using automated computer vision structure from motion / photogrammetric methods. New gaps that formed between each UAV flight were identified by subtracting DSMs between each interval and identifying areas of large deviation. A total of 48 new gaps were detected from 2014-10-02 to 2015-07-23, with sizes ranging from less than 20 m2 to greater than 350 m2. The creation of new gaps was also evaluated across wet and dry seasons with 4.5 new gaps detected per month in the dry season (Jan. - May) and 5.2 per month outside the dry season (Oct. - Jan. & May - July). The incidence of gap formation was positively correlated with ground-surveyed liana stem density (R2 = 0.77, p UAV remote sensing.
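
    A simplified sketch of the DSM-differencing step implied above: the canopy height drop between two surveys is thresholded and connected regions above a minimum area are reported as new gaps. The thresholds, pixel size and exact processing chain are assumptions for illustration, not the author's workflow.

```python
import numpy as np
from scipy import ndimage

def detect_new_gaps(dsm_before, dsm_after, drop_threshold=5.0,
                    min_area_m2=20.0, pixel_size_m=1.0):
    """Flag pixels where the canopy surface dropped by more than
    `drop_threshold` metres between two surveys and keep connected
    regions larger than `min_area_m2` (assumed approach)."""
    drop = dsm_before - dsm_after                 # positive where canopy fell
    candidate = drop > drop_threshold
    labels, n = ndimage.label(candidate)          # connected gap candidates
    gaps = []
    for region in range(1, n + 1):
        area = (labels == region).sum() * pixel_size_m ** 2
        if area >= min_area_m2:
            gaps.append({"label": region,
                         "area_m2": float(area),
                         "mean_drop_m": float(drop[labels == region].mean())})
    return gaps
```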

  12. Computer vision-based apple grading for golden delicious apples based on surface features

    Directory of Open Access Journals (Sweden)

    Payman Moallem

    2017-03-01

    Full Text Available In this paper, a computer vision-based algorithm for golden delicious apple grading is proposed which works in six steps. Non-apple pixels as background are firstly removed from input images. Then, stem end is detected by combination of morphological methods and Mahalanobis distant classifier. Calyx region is also detected by applying K-means clustering on the Cb component in YCbCr color space. After that, defects segmentation is achieved using Multi-Layer Perceptron (MLP neural network. In the next step, stem end and calyx regions are removed from defected regions to refine and improve apple grading process. Then, statistical, textural and geometric features from refined defected regions are extracted. Finally, for apple grading, a comparison between performance of Support Vector Machine (SVM, MLP and K-Nearest Neighbor (KNN classifiers is done. Classification is done in two manners which in the first one, an input apple is classified into two categories of healthy and defected. In the second manner, the input apple is classified into three categories of first rank, second rank and rejected ones. In both grading steps, SVM classifier works as the best one with recognition rate of 92.5% and 89.2% for two categories (healthy and defected and three quality categories (first rank, second rank and rejected ones, among 120 different golden delicious apple images, respectively, considering K-folding with K = 5. Moreover, the accuracy of the proposed segmentation algorithms including stem end detection and calyx detection are evaluated for two different apple image databases.
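
    The final grading step can be illustrated with scikit-learn, assuming the earlier segmentation stages have already produced one feature vector per apple. The file names, label encoding and SVM hyperparameters below are placeholders rather than the paper's exact configuration, but the 5-fold cross-validated SVM mirrors the evaluation described.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: statistical, textural and geometric features of refined defect regions,
# y: quality labels, e.g. 0 = first rank, 1 = second rank, 2 = rejected.
# Both files are hypothetical outputs of the preceding segmentation stages.
X = np.load("apple_features.npy")
y = np.load("apple_labels.npy")

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
scores = cross_val_score(
    clf, X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print("5-fold accuracy: %.1f%% +/- %.1f%%"
      % (100 * scores.mean(), 100 * scores.std()))
```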

  13. OpenVX-based Python Framework for real-time cross platform acceleration of embedded computer vision applications

    Directory of Open Access Journals (Sweden)

    Ori Heimlich

    2016-11-01

    Full Text Available Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, in an attempt to provide both system- and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained data-flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented and functional programming, nor does it have runtime type-checking. Here we present a Python-based full implementation of OpenVX, which eliminates many of the discrepancies between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. The demonstration includes static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.

  14. The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics.

    Science.gov (United States)

    Chinellato, Eris; Del Pobil, Angel P

    2009-06-01

    The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

  15. Exploration of the Theoretical Framework of Computer Vision

    Institute of Scientific and Technical Information of China (English)

    罗阳倩子

    2015-01-01

    This paper describes the theoretical framework of computer vision, analyses the problems with the existing framework, and proposes new developments of the framework to ensure that the scene information obtained through computer vision is more complete.

  16. The Effectiveness of Interactivity in Multimedia Software Tutorials

    Science.gov (United States)

    Whitman, Lisa

    2013-01-01

    Many people face the challenge of finding effective computer-based software instruction, including employees who must learn how to use software applications for their job and students of distance education classes. Therefore, it is important to conduct research on how computer-based multimedia software tutorials should be designed so they are as…

  17. GOCE User Toolbox and Tutorial

    Science.gov (United States)

    Benveniste, J.; Knudsen, P.

    2013-12-01

    The GOCE User Toolbox GUT is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in Geodesy, Oceanography and Solid Earth Physics. The GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Mac. The toolbox is supported by The GUT Algorithm Description and User Guide and The GUT Install Guide. A set of a-priori data and models is made available as well. Recently, the second version of the GOCE User Toolbox (GUT) was developed to enhance the exploitation of GOCE Level 2 data with ERS and ENVISAT altimetry. The developments of GUT focused on the following issues: data extraction, generation, filtering, and data save and restore. Without any doubt, the development of the GOCE User Toolbox has played a major role in paving the way to successful use of the GOCE data for oceanography. The results of the preliminary analysis carried out in this phase of the GUTS project have already demonstrated a significant advance in the ability to determine the ocean's general circulation. The improved gravity models provided by the GOCE mission have enhanced the resolution and sharpened the boundaries of those features compared with earlier satellite-only solutions. Calculation of the geostrophic surface currents from the MDT reveals improvements for all of the ocean's major current systems.

  18. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    Science.gov (United States)

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly.
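
    The rendering strategy described above (stimulate only the electrodes corresponding to a detected object, rather than downsampling the whole image) can be sketched in a few lines. The 3x3 grid matches the nine-electrode case discussed; the upstream object detector is assumed to exist and is not shown, and the mapping below is an illustrative simplification rather than the study's actual simulation.

```python
import numpy as np

def electrode_pattern(obj_centers, frame_shape, grid=(3, 3)):
    """Light only the electrode(s) whose grid cell contains a detected
    object of interest. obj_centers: list of (x, y) pixel positions
    produced by a hypothetical upstream detector."""
    h, w = frame_shape
    pattern = np.zeros(grid, dtype=float)
    for (x, y) in obj_centers:
        row = min(int(y / h * grid[0]), grid[0] - 1)
        col = min(int(x / w * grid[1]), grid[1] - 1)
        pattern[row, col] = 1.0                   # stimulate this electrode
    return pattern

# Example: one object detected left of centre in a 480x640 frame
print(electrode_pattern([(120, 260)], (480, 640)))
```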

  19. Developing Problem-Solving Skills of Students Taking Introductory Physics via Web-Based Tutorials

    Science.gov (United States)

    Singh, Chandralekha; Haileselassie, Daniel

    2010-01-01

    Science teaching and learning can be made both engaging and student-centered using pedagogical, computer-based learning tools. We have developed self-paced interactive problem-solving tutorials for introductory physics. These tutorials can provide guidance and support for a variety of problem-solving techniques, as well as opportunities for…

  1. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel distributed, connectionist, neural networks' have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist, neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object centered model from image centered primitives. The system consists of three network modules each of which represent information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object based and the image based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by component. It also seems to support Marr's notions
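
    The content-addressable recall that this architecture exploits can be illustrated with a textbook Hopfield-style associative memory: stored model patterns are recovered from noisy, image-derived probes. This is a generic sketch of the principle, not the three-module network described in the paper.

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule over +/-1 model patterns."""
    p = np.asarray(patterns, dtype=float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0.0)                 # no self-connections
    return w

def recall(w, probe, steps=20):
    """Relax a noisy probe to the nearest stored pattern (synchronous
    updates for brevity)."""
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1.0
    return s

models = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                   [1, 1, 1, 1, -1, -1, -1, -1]])
w = train(models)
noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])    # corrupted copy of model 0
print(recall(w, noisy))                          # settles back onto model 0
```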

  2. The MueLu Tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Jonathan Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wiesner, Tobias A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Prokopenko, Andrey [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gee, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    The MueLu tutorial is written as a hands-on tutorial for MueLu, the next generation multigrid framework in Trilinos. It covers the whole spectrum from absolute beginners’ topics to expert level. Since the focus of this tutorial is on practical and technical aspects of multigrid methods in general and MueLu in particular, the reader is expected to have a basic understanding of multigrid methods and its general underlying concepts. Please refer to multigrid textbooks (e.g. [1]) for the theoretical background.

  3. A computer vision system for the recognition of trees in aerial photographs

    Science.gov (United States)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  4. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
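
    As a small worked example of the adaptive neurons covered in lecture 2, the snippet below implements the classic perceptron learning rule on a linearly separable toy problem; it is a textbook sketch rather than code from the tutorial itself.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Perceptron learning rule.
    X: (n_samples, n_features); y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:       # misclassified -> update
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))                    # [-1.  1.  1.  1.]
```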

  5. GOCE User Toolbox and Tutorial

    Science.gov (United States)

    Benveniste, Jérôme; Knudsen, Per

    2016-07-01

    The GOCE User Toolbox GUT is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in Geodesy, Oceanography and Solid Earth Physics. The GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Mac. The toolbox is supported by The GUT Algorithm Description and User Guide and The GUT Install Guide. A set of a-priori data and models is made available as well. Without any doubt, the development of the GOCE User Toolbox has played a major role in paving the way to successful use of the GOCE data for oceanography. GUT version 2.2 was released in April 2014 and, besides some bug fixes, it adds the capability to compute the Simple Bouguer Anomaly (Solid Earth). During this fall a new GUT version 3 has been released. GUTv3 was further developed through a collaborative effort in which the scientific communities participate, aiming at implementing the remaining functionalities and facilitating a wider span of research in the fields of Geodesy, Oceanography and Solid Earth studies. Accordingly, GUT version 3 has: an attractive and easy-to-use Graphical User Interface (GUI) for the toolbox; enhanced software functionalities, such as facilitating the use of gradients, anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies; and an associated GUT VCM tool for analysing the GOCE variance-covariance matrices.

  6. Indico CONFERENCE tutorial

    CERN Document Server

    CERN. Geneva; Manzoni, Alex Marc

    2017-01-01

    This short tutorial explains how to create a CONFERENCE in indico and how to handle abstracts and registration forms, in detail: Timestamps: 1:01 - Programme  2:28 - Call for abstracts  11:50 - Abstract submission  13:41 - Abstract Review 15:41 - The Judge's Role 17:23 - Registration forms' creation 23:34 - Candidate participant's registration/application 25:54 - Customisation of Indico pages - Layout 28:08 - Customisation of Indico pages - Menus 29:47 - Configuring Event reminders and import into calendaring tools   See HERE a recent presentation by Pedro about the above steps in the life of an indico CONFERENCE event.

  7. Development of a Computer Vision Technology for the Forest Products Manufacturing Industry

    Science.gov (United States)

    D. Earl Kline; Richard Conners; Philip A. Araman

    1992-01-01

    The goal of this research is to create an automated processing/grading system for hardwood lumber that will be of use to the forest products industry. The objective of creating a full scale machine vision prototype for inspecting hardwood lumber will become a reality in calendar year 1992. Space for the full scale prototype has been created at the Brooks Forest...

  8. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    CERN Document Server

    Buyya, Rajkumar; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software), and holistically work to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of ...

  9. Trilinos 4.0 tutorial.

    Energy Technology Data Exchange (ETDEWEB)

    Sala, Marzio; Day, David Minot; Heroux, Michael Allen

    2004-05-01

    The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries. The goal of the Trilinos Project is to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multiphysics engineering and scientific applications. The emphasis is on developing robust, scalable algorithms in a software framework, using abstract interfaces for flexible interoperability of components while providing a full-featured set of concrete classes that implement all the abstract interfaces. This document introduces the use of Trilinos, version 4.0. The presented material includes, among others, the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IF-PACK, multilevel and domain decomposition preconditioners with ML, direct solution of linear system with Amesos, and iterative solution of nonlinear systems with NOX. The tutorial is a self-contained introduction, intended to help computational scientists effectively apply the appropriate Trilinos package to their applications. Basic examples are presented that are fit to be imitated. This document is a companion to the Trilinos User's Guide [20] and Trilinos Development Guides [21,22]. Please note that the documentation included in each of the Trilinos' packages is of fundamental importance.

  10. Comparability of the performance of in-line computer vision for geometrical verification of parts, produced by Additive Manufacturing

    DEFF Research Database (Denmark)

    Pedersen, David B.; Hansen, Hans N.

    2014-01-01

    …-customized parts with narrow geometrical tolerances require individual verification, whereas many hyper-complex parts simply cannot be measured by traditional means such as optical or mechanical measurement tools. This paper addresses the challenge by detailing how in-line computer vision has been employed in order to verify geometrical tolerances. The paper addresses to which precision tolerance verification has been achieved, by assessing the reconstruction capability against reference 3D scanning for a selected number of AM processes. Geometrical verification was achieved down to a precision of 20 μm.
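
    The comparison against a reference scan can be sketched as a nearest-neighbour deviation analysis between two point clouds, assuming both have already been aligned (for example by ICP) and are expressed in millimetres; this is an illustrative computation, not the authors' verification pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(reconstructed_pts, reference_pts):
    """Compare an in-line reconstruction against a reference 3D scan.
    Both inputs are Nx3 point arrays, assumed pre-aligned and in mm."""
    tree = cKDTree(reference_pts)
    dist, _ = tree.query(reconstructed_pts)      # nearest-neighbour distances
    return {"mean_um": 1e3 * dist.mean(),
            "rms_um": 1e3 * np.sqrt((dist ** 2).mean()),
            "p95_um": 1e3 * np.percentile(dist, 95)}
```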

  11. Dental wear estimation using a digital intra-oral optical scanner and an automated 3D computer vision method.

    Science.gov (United States)

    Meireles, Agnes Batista; Vieira, Antonio Wilson; Corpas, Livia; Vandenberghe, Bart; Bastos, Flavia Souza; Lambrechts, Paul; Campos, Mario Montenegro; Las Casas, Estevam Barbosa de

    2016-01-01

    The objective of this work was to propose an automated and direct process to grade tooth wear intra-orally. Eight extracted teeth were etched with acid for different times to produce wear and scanned with an intra-oral optical scanner. Computer vision algorithms were used for alignment and comparison among models. Wear volume was estimated and visual scoring was achieved to determine reliability. Results demonstrated that it is possible to directly detect submillimeter differences in teeth surfaces with an automated method with results similar to those obtained by direct visual inspection. The investigated method proved to be reliable for comparison of measurements over time.

  12. Computer vision: automating DEM generation of active lava flows and domes from photos

    Science.gov (United States)

    James, M. R.; Varley, N. R.; Tuffen, H.

    2012-12-01

    Accurate digital elevation models (DEMs) form fundamental data for assessing many volcanic processes. We present a photo-based approach developed within the computer vision community to produce DEMs from a consumer-grade digital camera and freely available software. Two case studies, based on the Volcán de Colima lava dome and the Puyehue Cordón-Caulle obsidian flow, highlight the advantages of the technique in terms of the minimal expertise required, the speed of data acquisition and the automated processing involved. The reconstruction procedure combines structure-from-motion and multi-view stereo algorithms (SfM-MVS) and can generate dense 3D point clouds (millions of points) from multiple photographs of a scene taken from different positions. Processing is carried out by automated software (e.g. http://blog.neonascent.net/archives/bundler-photogrammetry-package/). SfM-MVS reconstructions are initially un-scaled and un-oriented, so additional geo-referencing software has been developed. Although this step requires the presence of some control points, the SfM-MVS approach has significantly easier image acquisition and control requirements than traditional photogrammetry, facilitating its use in a broad range of difficult environments. At Colima, the lava dome surface was reconstructed from recent and archive images taken from light aircraft overflights (2007-2011). Scaling and geo-referencing were carried out using features identified in web-sourced ortho-imagery obtained as a basemap layer in ArcMap - no ground-based measurements were required. Average surface measurement densities are typically 10-40 points per m2. Over mean viewing distances of ~500-2500 m (for different surveys), RMS error on the control features is ~1.5 m. The derived DEMs (with 1-m grid resolution) are sufficient to quantify volumetric change, as well as to highlight the structural evolution of the upper surface of the dome following an explosion in June 2011. At Puyehue Cord

  13. Modeling Visual Information Processing in Brain: A Computer Vision Point of View and Approach

    CERN Document Server

    Diamant, Emanuel

    2007-01-01

    We live in the Information Age, and information has become a critically important component of our life. The success of the Internet made huge amounts of it easily available and accessible to everyone. To keep the flow of this information manageable, means for its faultless circulation and effective handling have become urgently required. Considerable research efforts are dedicated today to address this necessity, but they are seriously hampered by the lack of a common agreement about "What is information?" In particular, what is "visual information" - human's primary input from the surrounding world. The problem is further aggravated by a long-lasting stance borrowed from biological vision research that assumes human-like information processing as an enigmatic mix of perceptual and cognitive vision faculties. I am trying to find a remedy for this bizarre situation. Relying on a new definition of "information", which can be derived from Kolmogorov's complexity theory and Chaitin's notion of algorithmic inf...

  14. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the ... Includes 2010 U.S. age-specific prevalence rates for low vision by age and race/ethnicity ...

  15. A computer vision integration model for a multi-modal cognitive system

    OpenAIRE

    Vrecko A.; Skocaj D.; Hawes N.; Leonardis A.

    2009-01-01

    We present a general method for integrating visual components into a multi-modal cognitive system. The integration is very generic and can combine an arbitrary set of modalities. We illustrate our integration approach with a specific instantiation of the architecture schema that focuses on integration of vision and language: a cognitive system able to collaborate with a human, learn and display some understanding of its surroundings. As examples of cross-modal interaction we describe mechanis...

  16. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  17. Computer vision system approach in colour measurements of foods: Part II. validation of methodology with real foods

    Directory of Open Access Journals (Sweden)

    Fatih TARLAK

    2016-01-01

    Full Text Available Abstract The colour of food is one of the most important factors affecting consumers’ purchasing decision. Although there are many colour spaces, the most widely used colour space in the food industry is L*a*b* colour space. Conventionally, the colour of foods is analysed with a colorimeter that measures small and non-representative areas of the food and the measurements usually vary depending on the point where the measurement is taken. This leads to the development of alternative colour analysis techniques. In this work, a simple and alternative method to measure the colour of foods known as “computer vision system” is presented and justified. With the aid of the computer vision system, foods that are homogenous and uniform in colour and shape could be classified with regard to their colours in a fast, inexpensive and simple way. This system could also be used to distinguish the defectives from the non-defectives. Quality parameters of meat and dairy products could be monitored without any physical contact, which causes contamination during sampling.
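
    A minimal sketch of such a computer vision colour measurement: the whole (optionally masked) food region of an image is converted to CIELAB and averaged, instead of probing a few colorimeter spots. The image file and mask are placeholders; note the rescaling needed because OpenCV stores 8-bit Lab values in a compressed range.

```python
import cv2
import numpy as np

def mean_lab(image_bgr, mask=None):
    """Average CIELAB values over the food region of an image.
    OpenCV stores 8-bit Lab as L*255/100, a*+128, b*+128, so the values
    are rescaled back to the usual L*a*b* ranges."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    if mask is not None:
        lab = lab[mask > 0]          # keep only pixels inside the food mask
    else:
        lab = lab.reshape(-1, 3)
    L, a, b = lab.mean(axis=0)
    return L * 100.0 / 255.0, a - 128.0, b - 128.0

img = cv2.imread("food_sample.png")  # hypothetical image of a food sample
print("L*a*b* =", mean_lab(img))
```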

  18. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    Science.gov (United States)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model course which will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources - providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment; employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.
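
    The kind of feature tracking and multiple-view geometry mentioned above can be illustrated with a generic frame-to-basemap registration using OpenCV; this is an illustrative sketch of the general technique, not the commercial capability described in the paper.

```python
import cv2
import numpy as np

def georegister_frame(frame_gray, reference_gray):
    """Estimate a homography mapping a video frame onto a georeferenced
    base image via ORB feature matching and RANSAC (illustrative only)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame_gray, None)
    k2, d2 = orb.detectAndCompute(reference_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H        # frame pixel -> reference (georeferenced) pixel
```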

  19. Computer Vision Tools for Low-Cost and Noninvasive Measurement of Autism-Related Behaviors in Infants

    Directory of Open Access Journals (Sweden)

    Jordan Hashemi

    2014-01-01

    Full Text Available The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated which promote development and improve prognosis. Research on autism spectrum disorder (ASD suggests that behavioral signs can be observed late in the first year of life. Many of these studies involve extensive frame-by-frame video observation and analysis of a child's natural behavior. Although nonintrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are burdensome for clinical and large population research purposes. This work is a first milestone in a long-term project on non-invasive early observation of children in order to aid in risk detection and research of neurodevelopmental disorders. We focus on providing low-cost computer vision tools to measure and identify ASD behavioral signs based on components of the Autism Observation Scale for Infants (AOSI. In particular, we develop algorithms to measure responses to general ASD risk assessment tasks and activities outlined by the AOSI which assess visual attention by tracking facial features. We show results, including comparisons with expert and nonexpert clinicians, which demonstrate that the proposed computer vision tools can capture critical behavioral observations and potentially augment the clinician's behavioral observations obtained from real in-clinic assessments.

  20. Hadoop Tutorials - Hadoop Foundations

    CERN Document Server

    CERN. Geneva; Lanza Garcia, Daniel

    2016-01-01

    The Hadoop ecosystem is the leading opensource platform for distributed storage and processing of "big data". The Hadoop platform is available at CERN as a central service provided by the IT department. This tutorial organized by the IT Hadoop service, aims to introduce the main concepts about Hadoop technology in a practical way and is targeted to those who would like to start using the service for distributed parallel data processing. The main topics that will be covered are: Hadoop architecture and available components How to perform distributed parallel processing in order to explore and create reports with SQL (with Apache Impala) on example data. Using a HUE - Hadoop web UI for presenting the results in user friendly way. How to format and/or structure data in order to make data processing more efficient - by using various data formats/containers and partitioning techniques (Avro, Parquet, HBase). ...

  1. CVED: A Digital Library Collection for Computer Vision Education

    Institute of Scientific and Technical Information of China (English)

    刘燕权; 王凌云; 刘莎

    2013-01-01

    The Computer Vision Education Digital Library (CVED) is an attempt to bring collective educational successes and capabilities together into a comprehensive digital library collection for computer vision education. It contains links to computer vision courses around the world, links to and evaluations of textbooks, and links to assignments and data sets provided by computer vision educators. CVED is a portal project of the U.S. National Science Digital Library (NSDL): it aims to provide a platform that both hosts a variety of digital resources for computer vision education and allows the community to contribute and share such resources, pointing towards the future development of discipline-oriented educational digital libraries. This paper reviews the library's development and current status, including a project overview, its collection organization, target users and special services, and closes with the authors' comments and suggestions.

  2. Online Tutorials and Effective Information Literacy Instruction for Distance Learners

    Science.gov (United States)

    Gonzales, Brighid M.

    2014-01-01

    As Internet and computer technologies have evolved, libraries have incorporated these technologies into the delivery of information literacy instruction. Of particular benefit is the ability of online tutorials to deliver information literacy instruction to students not physically present on campus. A survey of library and information science…

  3. LHC@home online tutorial for Windows users - recording

    CERN Document Server

    CERN. Geneva

    2016-01-01

    A step-by-step online tutorial about LHC@home for Windows users by Karolina Bozek. It contains detailed instructions on how to join this volunteer computing project. This 5' video is linked from http://lhcathome.web.cern.ch/join-us and is also available from the CDS e-learning category.

  4. LHC@home online tutorial for Linux users - recording

    CERN Document Server

    CERN. Geneva

    2016-01-01

    A step-by-step online tutorial for LHC@home by Karolina Bozek. It contains detailed instructions for Linux users on how to join this volunteer computing project. This 5' video is linked from http://lhcathome.web.cern.ch/join-us Click here to see the commands to copy/paste for installing BOINC and VirtualBox.

  5. Quantitative Microbial Risk Assessment Tutorial - Primer

    Science.gov (United States)

    This document provides a Quantitative Microbial Risk Assessment (QMRA) primer that organizes QMRA tutorials. The tutorials describe functionality of a QMRA infrastructure, guide the user through software use and assessment options, provide step-by-step instructions for implementi...

  6. Sigma: computer vision in the service of safety and reliability in inspection services

    Energy Technology Data Exchange (ETDEWEB)

    Pineiro, P. J.; Mendez, M.; Garcia, A.; Cabrera, E.; Regidor, J. J.

    2012-11-01

    Computer vision has grown very fast over the last decade, with increasingly efficient tools and algorithms. This enables the development of new applications in the nuclear field, providing more efficient equipment and tasks: redundant systems, vision-guided mobile robots, automated visual defect recognition, measurement, etc. In this paper, Tecnatom describes a detailed example of a computer vision application developed to provide secure, redundant identification of the thousands of tubes in a power plant steam generator. Some other ongoing or planned computer vision projects at Tecnatom are also introduced. New application possibilities appear in inspection systems for nuclear components, where the main objective is to maximise their reliability. (Author) 6 refs.

  7. A malaria diagnostic tool based on computer vision screening and visualization of Plasmodium falciparum candidate areas in digitized blood smears.

    Directory of Open Access Journals (Sweden)

    Nina Linder

    Full Text Available INTRODUCTION: Microscopy is the gold standard for diagnosis of malaria, however, manual evaluation of blood films is highly dependent on skilled personnel in a time-consuming, error-prone and repetitive process. In this study we propose a method using computer vision detection and visualization of only the diagnostically most relevant sample regions in digitized blood smears. METHODS: Giemsa-stained thin blood films with P. falciparum ring-stage trophozoites (n = 27 and uninfected controls (n = 20 were digitally scanned with an oil immersion objective (0.1 µm/pixel to capture approximately 50,000 erythrocytes per sample. Parasite candidate regions were identified based on color and object size, followed by extraction of image features (local binary patterns, local contrast and Scale-invariant feature transform descriptors used as input to a support vector machine classifier. The classifier was trained on digital slides from ten patients and validated on six samples. RESULTS: The diagnostic accuracy was tested on 31 samples (19 infected and 12 controls. From each digitized area of a blood smear, a panel with the 128 most probable parasite candidate regions was generated. Two expert microscopists were asked to visually inspect the panel on a tablet computer and to judge whether the patient was infected with P. falciparum. The method achieved a diagnostic sensitivity and specificity of 95% and 100% as well as 90% and 100% for the two readers respectively using the diagnostic tool. Parasitemia was separately calculated by the automated system and the correlation coefficient between manual and automated parasitemia counts was 0.97. CONCLUSION: We developed a decision support system for detecting malaria parasites using a computer vision algorithm combined with visualization of sample areas with the highest probability of malaria infection. The system provides a novel method for blood smear screening with a significantly reduced need for
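
    The classification stage can be sketched with scikit-image and scikit-learn: each candidate region (already found by colour and size screening) is described by a local binary pattern histogram and scored by a support vector machine, and the highest-probability candidates form the review panel. File names, descriptors and parameters below are placeholders, not the published configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch_gray, P=8, R=1):
    """Uniform LBP histogram as a simple texture descriptor for a candidate."""
    lbp = local_binary_pattern(patch_gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical training data: grayscale candidate patches and 0/1 labels
train_patches = np.load("candidate_patches.npy")   # shape (n, h, w), assumed
train_labels = np.load("candidate_labels.npy")     # 1 = parasite, 0 = artefact
X = np.array([lbp_histogram(p) for p in train_patches])
clf = SVC(kernel="rbf", probability=True).fit(X, train_labels)

def rank_candidates(candidate_patches, top_k=128):
    """Return indices of the most probable parasite candidates for review."""
    feats = np.array([lbp_histogram(p) for p in candidate_patches])
    probs = clf.predict_proba(feats)[:, 1]
    return np.argsort(probs)[::-1][:top_k]
```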

  8. Advocacy and IPR, tutorial 4

    CERN Document Server

    CERN. Geneva

    2005-01-01

    With open access and repositories assuming a high profile some may question whether advocacy is still necessary. Those involved in the business of setting up and populating repositories are aware that in the majority of institutions there is still a great need for advocacy. This tutorial will give participants an opportunity to discuss different advocacy methods and approaches, including the 'top down' and 'bottom up' approach, publicity methods and the opportunities offered by funding body positions on open access. Participants will have the opportunity to share experiences of what works and what doesn't. The advocacy role often encompasses responsibility for advising academics on IPR issues. This is a particularly critical area where repository staff are engaged in depositing content on behalf of academics. The tutorial will offer an opportunity to discuss the IPR issues encountered by those managing repositories. The tutorial will draw on the experience of participants who have been engaged in advocacy act...

  9. Lambda Vision

    Science.gov (United States)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching in the area of applying Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing up the architecture into a speed layer for low-latent processing and a batch layer for higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide area field-of-views. The evaluation was done on a scaled-out cloud infrastructure that is similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and precision of its results using a computer vision algorithm designed to identify man-made objects in sparse data terrain.

  10. Computer program TRACK_VISION for simulating optical appearance of etched tracks in CR-39 nuclear track detectors

    Science.gov (United States)

    Nikezic, D.; Yu, K. N.

    2008-04-01

    A computer program called TRACK_VISION for determining the optical appearances of tracks in nuclear track materials resulting from light-ion irradiation and subsequent chemical etching was described. A previously published software, TRACK_TEST, was the starting point for the present software TRACK_VISION, which contained TRACK_TEST as its subset. The programming steps were outlined. Descriptions of the program were given, including the built-in V functions for the commonly employed nuclear track material commercially known as CR-39 (polyallyldiglycol carbonate) irradiated by alpha particles.
    Program summary
    Program title: TRACK_VISION
    Catalogue identifier: AEAF_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAF_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 4084
    No. of bytes in distributed program, including test data, etc.: 71 117
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Pentium PC
    Operating system: Windows 95+
    RAM: 256 MB
    Classification: 17.5, 18
    External routines: The entire code must be linked with the MSFLIB library. MSFLib is a collection of C and C++ modules which provides a general framework for processing IBM's AFP datastream. MSFLIB is specific to Visual Fortran (Digital, Compaq or Intel flavors).
    Nature of problem: Nuclear track detectors are commonly used for radon measurements through studying the tracks generated by the incident alpha particles. Optical microscopes are often used for this purpose but the process is relatively tedious and time-consuming. Several automatic and semi-automatic systems have been developed in order to facilitate determination of track densities. In all these automatic systems, the optical appearance of the tracks is important. However, not much has been done so far towards obtaining the

  11. Tutorials in complex photonic media

    CERN Document Server

    Noginov, Mikhail A; McCall, Martin W; Zheludev, Nikolay I

    2010-01-01

    The field of complex photonic media encompasses many leading-edge areas in physics, chemistry, nanotechnology, materials science, and engineering. In Tutorials in Complex Photonic Media, leading experts have brought together 19 tutorials on breakthroughs in modern optics, such as negative refraction, chiral media, plasmonics, photonic crystals, and organic photonics. This text will help students, engineers, and scientists entering the field to become familiar with the interrelated aspects of the subject. It also serves well as a supplemental text in introductory and advanced courses on optica

  12. Mail2Print online tutorial

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Mail2print is a feature which allows you to send documents to a printer by mail. This tutorial (text attached to the event page) explains how to use this service. Content owner: Vincent Nicolas Bippus Presenter: Pedro Augusto de Freitas Batista Tell us what you think via e-learning.support at cern.ch More tutorials in the e-learning collection of the CERN Document Server (CDS) https://cds.cern.ch/collection/E-learning%20modules?ln=en All info about the CERN rapid e-learning project is linked from http://twiki.cern.ch/ELearning  

  13. Perspectives and Visions of Computer Science Education in Primary and Secondary (K-12) Schools

    Science.gov (United States)

    Hubwieser, Peter; Armoni, Michal; Giannakos, Michail N.; Mittermeir, Roland T.

    2014-01-01

    In view of the recent developments in many countries, for example, in the USA and in the UK, it appears that computer science education (CSE) in primary or secondary schools (K-12) has reached a significant turning point, shifting its focus from ICT-oriented to rigorous computer science concepts. The goal of this special issue is to offer a…

  15. VibroCV: a computer vision-based vibroarthrography platform with possible application to Juvenile Idiopathic Arthritis.

    Science.gov (United States)

    Wiens, Andrew D; Prahalad, Sampath; Inan, Omer T

    2016-08-01

    Vibroarthrography, a method for interpreting the sounds emitted by a knee during movement, has been studied for several joint disorders since 1902. However, to our knowledge, the usefulness of this method for management of Juvenile Idiopathic Arthritis (JIA) has not been investigated. To study joint sounds as a possible new biomarker for pediatric cases of JIA we designed and built VibroCV, a platform to capture vibroarthrograms from four accelerometers; electromyograms (EMG) and inertial measurements from four wireless EMG modules; and joint angles from two Sony Eye cameras and six light-emitting diodes with commercially-available off-the-shelf parts and computer vision via OpenCV. This article explains the design of this turn-key platform in detail, and provides a sample recording captured from a pediatric subject.
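
    The sketch below is a generic OpenCV illustration (not the VibroCV code) of the kind of processing such a platform performs: detect bright LED markers in a camera frame and compute a joint angle from three of them. The threshold value and marker ordering are assumptions.

```python
# Generic LED-marker detection and joint-angle computation with OpenCV 4.x.
# Illustrative only; thresholds and marker assignment are assumptions.
import cv2
import numpy as np

def led_centroids(frame_bgr: np.ndarray, thresh: int = 240) -> np.ndarray:
    """Return an (N, 2) array of centroids of bright blobs (candidate LEDs)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(centroids)

def joint_angle(proximal, joint, distal) -> float:
    """Angle (degrees) at `joint` formed by the two limb segments."""
    v1 = np.asarray(proximal, float) - np.asarray(joint, float)
    v2 = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```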

  16. SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality

    Energy Technology Data Exchange (ETDEWEB)

    MacDougall, R.D.; Scherrer, B [Boston Children’s Hospital, Boston, MA (United States); Don, S [Washington University, St. Louis, MO (United States)

    2016-06-15

    Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video were streamed to a Windows 10 PC. Proprietary software created an augmented reality interface where overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of ordered body part, position of image receptor, thickness of anatomy, location of AEC cells, collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data were collected in a volunteer study to validate patient thickness measurements; x-ray images were not acquired. Results: Proprietary software correctly identified ordered body part, measured patient motion, and calculated thickness of anatomy. Pre-clinical data demonstrated accuracy and precision of body part thickness measurement when compared with other methods (e.g. laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund.
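
    As a hedged illustration of how anatomy thickness might be derived from a tube-mounted depth camera, the sketch below subtracts the median depth to the patient surface from a calibrated depth of the image-receptor plane. The calibration value, ROI handling and function names are assumptions, not the proprietary software described above.

```python
# One plausible way to estimate anatomy thickness from a depth map.
# Illustrative assumption only; not the prototype's proprietary method.
import numpy as np

def anatomy_thickness_cm(depth_map_m: np.ndarray,
                         body_mask: np.ndarray,
                         receptor_plane_depth_m: float) -> float:
    """Median thickness (cm) of the anatomy inside `body_mask`.

    depth_map_m            -- per-pixel depth from the camera, in metres
    body_mask              -- boolean mask of pixels on the patient
    receptor_plane_depth_m -- calibrated depth of the tabletop/receptor plane
    """
    surface_depth = np.median(depth_map_m[body_mask])  # depth to patient surface
    thickness_m = receptor_plane_depth_m - surface_depth
    return float(thickness_m * 100.0)
```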

  17. Historical review of tutorial in education

    Directory of Open Access Journals (Sweden)

    María Gabriela Luna Pérez

    2015-01-01

    Full Text Available For centuries, tutorials in the history of education have had an individual character. The paper reviews how tutorials in education have evolved from ancient Greece to the present, taking into account the following aspects: a) their general understanding, b) the preferred areas of orientation, c) their role in guiding the learning process, and d) their supporting role. We offer a historical account of the development of tutorials in Mexican education. The study outlines the main trends of tutorial activities in primary education; the evidence confirms that tutoring has evolved from the learning of philosophical and ethical questions to multiple learning involving competencies.

  18. Online Searching in PBL Tutorials

    Science.gov (United States)

    Jin, Jun; Bridges, Susan M.; Botelho, Michael G.; Chan, Lap Ki

    2015-01-01

    This study aims to explore how online searching plays a role during PBL tutorials in two undergraduate health sciences curricula, Medicine and Dentistry. Utilizing Interactional Ethnography (IE) as an organizing framework for data collection and analysis, and drawing on a critical theory of technology as an explanatory lens, enabled a textured…

  19. Web-tutorials in context

    DEFF Research Database (Denmark)

    Lund, Haakon; Pors, Niels Ole

    2012-01-01

    Purpose – The purpose of the research is to investigate Norwegian web‐tutorials in contexts consisting of organizational issues and different forms of usability in relation to students’ perception and use of the system. Further, the research investigates the usefulness of the concepts concerning...

  20. Tutorial Instruction in Science Education

    Directory of Open Access Journals (Sweden)

    Rhea Miles

    2015-06-01

    Full Text Available The purpose of the study is to examine the tutorial practices of in-service teachers to address the underachievement in the science education of K-12 students. Method: In-service teachers in Virginia and North Carolina were given a survey questionnaire to examine how they tutored students who were in need of additional instruction. Results: When these teachers were asked, “How do you describe a typical one-on-one science tutorial session?” the majority of their responses were categorized as teacher-directed. Many of the teachers would provide a science tutorial session for a student after school for 16-30 minutes, one to three times a week. Respondents also indicated they would rely on technology, peer tutoring, scientific inquiry, or themselves for one-on-one science instruction. Over half of the in-service teachers that responded to the questionnaire stated that they would never rely on outside assistance, such as a family member or an after school program to provide tutorial services in science. Additionally, very few reported that they incorporated the ethnicity, culture, or the native language of ELL students into their science tutoring sessions.

  2. Hypermedia 1990 structured Hypertext tutorial

    Science.gov (United States)

    Johnson, J. Scott

    1990-01-01

    Hypermedia 1990 structured Hypertext tutorial is presented in the form of view-graphs. The following subject areas are covered: structured hypertext; analyzing hypertext documents for structure; designing structured hypertext documents; creating structured hypertext applications; structuring service and repair documents; maintaining structured hypertext documents; and structured hypertext conclusion.

  3. Solar Tutorial and Annotation Resource (STAR)

    Science.gov (United States)

    Showalter, C.; Rex, R.; Hurlburt, N. E.; Zita, E. J.

    2009-12-01

    We have written a software suite designed to facilitate solar data analysis by scientists, students, and the public, anticipating enormous datasets from future instruments. Our “STAR" suite includes an interactive learning section explaining 15 classes of solar events. Users learn software tools that exploit humans’ superior ability (over computers) to identify many events. Annotation tools include time slice generation to quantify loop oscillations, the interpolation of event shapes using natural cubic splines (for loops, sigmoids, and filaments) and closed cubic splines (for coronal holes). Learning these tools in an environment where examples are provided prepares new users to comfortably utilize annotation software with new data. Upon completion of our tutorial, users are presented with media of various solar events and asked to identify and annotate the images, to test their mastery of the system. Goals of the project include public input into the data analysis of very large datasets from future solar satellites, and increased public interest and knowledge about the Sun. In 2010, the Solar Dynamics Observatory (SDO) will be launched into orbit. SDO’s advancements in solar telescope technology will generate a terabyte per day of high-quality data, requiring innovation in data management. While major projects develop automated feature recognition software, so that computers can complete much of the initial event tagging and analysis, still, that software cannot annotate features such as sigmoids, coronal magnetic loops, coronal dimming, etc., due to large amounts of data concentrated in relatively small areas. Previously, solar physicists manually annotated these features, but with the imminent influx of data it is unrealistic to expect specialized researchers to examine every image that computers cannot fully process. A new approach is needed to efficiently process these data. Providing analysis tools and data access to students and the public have proven
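
    The two spline types mentioned above can be illustrated with SciPy: natural cubic splines for open features (loops, sigmoids, filaments) and periodic cubic splines for closed boundaries (coronal holes). The sketch below is a generic illustration, not the STAR annotation code.

```python
# Natural (open) and periodic (closed) cubic spline interpolation of
# annotated points.  Generic SciPy sketch; not the STAR software.
import numpy as np
from scipy.interpolate import CubicSpline

def open_feature_curve(points_xy: np.ndarray, samples: int = 200) -> np.ndarray:
    """Smooth open curve through annotated points (natural end conditions)."""
    t = np.linspace(0.0, 1.0, len(points_xy))
    cs_x = CubicSpline(t, points_xy[:, 0], bc_type="natural")
    cs_y = CubicSpline(t, points_xy[:, 1], bc_type="natural")
    tt = np.linspace(0.0, 1.0, samples)
    return np.column_stack([cs_x(tt), cs_y(tt)])

def closed_feature_curve(points_xy: np.ndarray, samples: int = 200) -> np.ndarray:
    """Smooth closed boundary: repeat the first point and use periodic splines."""
    closed = np.vstack([points_xy, points_xy[:1]])
    t = np.linspace(0.0, 1.0, len(closed))
    cs_x = CubicSpline(t, closed[:, 0], bc_type="periodic")
    cs_y = CubicSpline(t, closed[:, 1], bc_type="periodic")
    tt = np.linspace(0.0, 1.0, samples)
    return np.column_stack([cs_x(tt), cs_y(tt)])
```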

  4. Computer Vision Methods for Improved Mobile Robot State Estimation in Challenging Terrains

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2006-11-01

    Full Text Available External perception based on vision plays a critical role in developing improved and robust localization algorithms, as well as in gaining important information about the vehicle and the terrain it is traversing. This paper presents two novel methods for rough-terrain mobile robots, using visual input. The first method consists of a stereovision algorithm for real-time 6DoF ego-motion estimation. It integrates image intensity information and 3D stereo data in the well-known Iterative Closest Point (ICP) scheme. Neither a priori knowledge of the motion nor inputs from other sensors are required, while the only assumption is that the scene always contains visually distinctive features which can be tracked over subsequent stereo pairs. This generates what is usually referred to as visual odometry. The second method aims at estimating the wheel sinkage of a mobile robot on sandy soil, based on an edge detection strategy. A semi-empirical model of wheel sinkage is also presented with reference to classical terramechanics theory. Experimental results obtained with an all-terrain mobile robot and with a wheel sinkage test bed are presented to validate our approach. It is shown that the proposed techniques can be integrated into control and planning algorithms to improve the performance of ground vehicles operating in uncharted environments.
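
    To make the ICP reference concrete, the sketch below shows one bare-bones ICP iteration (nearest-neighbour matching followed by an SVD/Kabsch rigid fit). The paper's method additionally fuses image intensity with the 3D stereo data; this geometric step is only a generic illustration.

```python
# One generic ICP iteration: nearest-neighbour matching + Kabsch rigid fit.
# Illustration only; not the intensity-augmented stereo ICP of the paper.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source: np.ndarray, target: np.ndarray):
    """source, target: (N, 3) and (M, 3) point clouds.
    Returns (R, t) such that source @ R.T + t approximates its matches."""
    # 1. Match every source point to its nearest target point.
    matches = target[cKDTree(target).query(source)[1]]
    # 2. Closed-form rigid fit (Kabsch) between the matched point sets.
    mu_s, mu_m = source.mean(axis=0), matches.mean(axis=0)
    H = (source - mu_s).T @ (matches - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t
```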

  5. An automatic colour-based computer vision algorithm for tracking the position of piglets

    Energy Technology Data Exchange (ETDEWEB)

    Navarro-Jover, J. M.; Alcaniz-Raya, M.; Gomez, V.; Balasch, S.; Moreno, J. R.; Grau-Colomer, V.; Torres, A.

    2009-07-01

    Artificial vision is a powerful observation tool for research in the field of livestock production. Thus, a digital image processing system based on the search for and recognition of colour spots in images was developed to detect the position of piglets in a farrowing pen. To this end, 24,000 images were captured over five takes (days), with a five-second interval between every other image. The nine piglets in a litter were each marked on their backs and sides with a different coloured spray paint, the colours being placed at a considerable distance from each other in RGB space. The programme requires the user to introduce the colour patterns to be found, and the output is an ASCII file with the positions (column X, line Y) for each of these marks within the image analysed. This information may be extremely useful for further applications in the study of animal behaviour and welfare parameters (huddling, activity, suckling, etc.). The software programme initially segments the image in the RGB colour space to separate the colour marks from the rest of the image, and then recognises the colour patterns using another colour space [B/(R+G+B), (G-R), (B-G)] that is more suitable for this purpose. This additional colour space was obtained by testing different colour combinations derived from R, G and B. The statistical evaluation of the programme's performance revealed an overall 72.5% rate of piglet detection, 89.1% of this total being correctly detected. (Author) 33 refs.
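
    The colour-space transform quoted above, [B/(R+G+B), (G-R), (B-G)], can be sketched in a few lines; the distance threshold and reference colours below are illustrative assumptions rather than the values used by the original programme.

```python
# Transform RGB into [B/(R+G+B), (G-R), (B-G)] and threshold around a
# reference mark colour.  Thresholds/reference values are assumptions.
import numpy as np

def transform(rgb: np.ndarray) -> np.ndarray:
    """rgb: (..., 3) float array with channels R, G, B in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    chroma = b / np.clip(r + g + b, 1e-6, None)
    return np.stack([chroma, g - r, b - g], axis=-1)

def mark_mask(image_rgb: np.ndarray, reference_rgb, tol=(0.05, 25.0, 25.0)):
    """Boolean mask of pixels within `tol` of the reference mark colour
    in the transformed colour space."""
    feat = transform(image_rgb.astype(np.float64))
    ref = transform(np.asarray(reference_rgb, dtype=np.float64))
    return np.all(np.abs(feat - ref) <= np.asarray(tol), axis=-1)

def mark_position(mask: np.ndarray):
    """(column X, line Y) centroid of the detected mark, or None if absent."""
    ys, xs = np.nonzero(mask)
    return (float(xs.mean()), float(ys.mean())) if xs.size else None
```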

  6. Fractographic classification in metallic materials by using 3D processing and computer vision techniques

    Directory of Open Access Journals (Sweden)

    Maria Ximena Bastidas-Rodríguez

    2016-09-01

    Full Text Available Failure analysis aims at collecting information about how and why a failure is produced. The first step in this process is a visual inspection of the flaw surface, which reveals the features, marks, and texture that characterize each type of fracture. This inspection is often carried out by personnel without experience, who may lack the knowledge to do it properly. This paper proposes a classification method for three kinds of fractures in crystalline materials: brittle, fatigue, and ductile. The method uses 3D vision, and it is expected to support failure analysis. The features used in this work were: i) Haralick’s features and ii) the fractal dimension. These features were computed from 3D images obtained with a Zeiss LSM 700 confocal laser scanning microscope. For the classification, we evaluated two classifiers: Artificial Neural Networks and Support Vector Machine. The performance evaluation was made by extracting four marginal relations from the confusion matrix: accuracy, sensitivity, specificity, and precision, plus three evaluation methods: Receiver Operating Characteristic space, the Individual Classification Success Index, and Jaccard’s coefficient. Although the classification percentage obtained by an expert is better than that obtained with the algorithm, the algorithm achieves a classification percentage near or exceeding 60% accuracy for the analyzed failure modes. The results presented here provide a good approach to address future research on texture analysis using 3D data.
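
    A minimal 2D sketch of the feature/classifier pipeline described above is given below, using GLCM (Haralick-style) texture features from scikit-image and an SVM from scikit-learn. The original work operates on 3D confocal data and also uses the fractal dimension; both are omitted here, and all parameter choices are assumptions.

```python
# Haralick-style GLCM texture features + SVM classifier (2D simplification).
# Requires scikit-image >= 0.19 and scikit-learn; parameters are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

PROPS = ("contrast", "homogeneity", "energy", "correlation")

def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
    """Texture feature vector for an 8-bit grayscale fracture image."""
    glcm = graycomatrix(gray_u8, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

def train_classifier(images, labels):
    """labels: e.g. 'brittle', 'fatigue' or 'ductile' for each image."""
    X = np.vstack([glcm_features(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, labels)
    return clf
```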

  7. METHODS OF ASSESSING THE DEGREE OF DESTRUCTION OF RUBBER PRODUCTS USING COMPUTER VISION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2015-01-01

    Full Text Available Methods that improve the video-scope analysis of the degree of destruction and aging of rubber in aggressive environments are essential for the technical inspection of rubber products. The main factor determining the degree of destruction of a rubber product is the degree of crack coverage, which can be described by parameters such as the total crack area, crack perimeter, geometric shape and others. Creating a methodology for assessing the degree of destruction of rubber products therefore poses the problem of developing a machine vision algorithm that estimates the degree of crack coverage of a sample and characterizes the fractures. To develop the image processing algorithm, experimental studies were performed on the artificial aging of several samples of products made from different rubbers. In the course of the experiments, several series of images of the vulcanizates were obtained in real time. To achieve these goals, light stabilization of the image array is first performed using a Gaussian filter. Thereafter, a binarization operation is applied to each image. The Canny algorithm is used to highlight the contours of the surface damage of the sample. The detected contours are converted into arrays of pixels. However, a single crack may be split across several contours, so an algorithm was developed to merge contours using a minimum-distance criterion. Finally, the morphological features of each contour are calculated (area, perimeter, length, width, angle of inclination, and the Minkowski dimension). Plots of the destruction parameters of the rubber product samples obtained by the method are shown. The developed method makes it possible to automate the assessment of the degree of aging of rubber products in telemetry systems and to study the dynamics of the aging process of polymers
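
    The processing chain outlined above (Gaussian smoothing, binarization, Canny edge detection, contour extraction and per-contour morphology) can be sketched with OpenCV as below. Contour merging by minimum distance and the Minkowski dimension are omitted, and the thresholds are illustrative assumptions.

```python
# Crack-contour extraction and per-contour morphology with OpenCV 4.x.
# Thresholds are assumptions; contour merging and Minkowski dimension omitted.
import cv2
import numpy as np

def crack_contours(gray_u8: np.ndarray):
    blurred = cv2.GaussianBlur(gray_u8, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours

def contour_morphology(contour) -> dict:
    """Area, perimeter, length, width and inclination angle of one crack."""
    (_, _), (w, h), angle = cv2.minAreaRect(contour)
    return {
        "area": cv2.contourArea(contour),
        "perimeter": cv2.arcLength(contour, closed=True),
        "length": max(w, h),
        "width": min(w, h),
        "angle_deg": angle,
    }

def coverage_fraction(contours, image_shape) -> float:
    """Fraction of the imaged surface covered by detected cracks."""
    total = sum(cv2.contourArea(c) for c in contours)
    return total / float(image_shape[0] * image_shape[1])
```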

  8. On the application of connectionist models for pattern recognition, robotics and computer vision: A technical report

    NARCIS (Netherlands)

    Kraaijveld, M.A.

    1989-01-01

    Connectionist models, commonly referred to as neural networks, are computing models in which large numbers of processing units are connected to each other with variable "weight". These weight values represent the "strength" of the connection between two units, which can be positive (excitatory, i.e.

  9. Computer Vision Syndrome for Non-Native Speaking Students: What Are the Problems with Online Reading?

    Science.gov (United States)

    Tseng, Min-chen

    2014-01-01

    This study investigated the online reading performances and the level of visual fatigue from the perspectives of non-native speaking students (NNSs). Reading on a computer screen is visually more demanding than reading printed text. Online reading requires frequent saccadic eye movements and imposes continuous focusing and alignment demand.…

  11. A Computer Vision System for Locating and Identifying Internal Log Defects Using CT Imagery

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Frederick Lamb; Philip A. Araman

    1991-01-01

    A number of researchers have shown the ability of magnetic resonance imaging (MRI) and computer tomography (CT) imaging to detect internal defects in logs. However, if these devices are ever to play a role in the forest products industry, automatic methods for analyzing data from these devices must be developed. This paper reports research aimed at developing a...

  13. Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis

    Science.gov (United States)

    Markovic, Marko; Dosen, Strahinja; Popovic, Dejan; Graimann, Bernhard; Farina, Dario

    2015-12-01

    Objective. Myoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control. Approach. We developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the-art myoelectric control system in ten able-bodied subjects and one amputee. All subjects used a transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living. Main results. The CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOFs. However, when tested with a less complex prosthetic system (a smaller number of DOFs), the CASP was slower but resulted in reaching motions that contained fewer compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training. Significance. The CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarios, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees.

  14. Development of a Configurable Growth Chamber with a Computer Vision System to Study Circadian Rhythm in Plants

    Directory of Open Access Journals (Sweden)

    Marcos Egea-Cortines

    2012-11-01

    Full Text Available Plant development is the result of an endogenous morphogenetic program that integrates environmental signals. The so-called circadian clock is a set of genes that integrates environmental inputs into an internal pacing system that gates growth and other outputs. Study of circadian growth responses requires high sampling rates to detect changes in growth and avoid aliasing. We have developed a flexible configurable growth chamber comprising a computer vision system that allows sampling rates ranging between one image per 30 s to hours/days. The vision system has a controlled illumination system, which allows the user to set up different configurations. The illumination system used emits a combination of wavelengths ensuring the optimal growth of species under analysis. In order to obtain high contrast of captured images, the capture system is composed of two CCD cameras, for day and night periods. Depending on the sample type, a flexible image processing software calculates different parameters based on geometric calculations. As a proof of concept we tested the system in three different plant tissues, growth of petunia- and snapdragon (Antirrhinum majus) flowers and of cladodes from the cactus Opuntia ficus-indica. We found that petunia flowers grow at a steady pace and display a strong growth increase in the early morning, whereas Opuntia cladode growth turned out not to follow a circadian growth pattern under the growth conditions imposed. Furthermore we were able to identify a decoupling of increase in area and length indicating that two independent growth processes are responsible for the final size and shape of the cladode.

  15. Development of a configurable growth chamber with a computer vision system to study circadian rhythm in plants.

    Science.gov (United States)

    Navarro, Pedro J; Fernández, Carlos; Weiss, Julia; Egea-Cortines, Marcos

    2012-11-09

    Plant development is the result of an endogenous morphogenetic program that integrates environmental signals. The so-called circadian clock is a set of genes that integrates environmental inputs into an internal pacing system that gates growth and other outputs. Study of circadian growth responses requires high sampling rates to detect changes in growth and avoid aliasing. We have developed a flexible configurable growth chamber comprising a computer vision system that allows sampling rates ranging between one image per 30 s to hours/days. The vision system has a controlled illumination system, which allows the user to set up different configurations. The illumination system used emits a combination of wavelengths ensuring the optimal growth of species under analysis. In order to obtain high contrast of captured images, the capture system is composed of two CCD cameras, for day and night periods. Depending on the sample type, a flexible image processing software calculates different parameters based on geometric calculations. As a proof of concept we tested the system in three different plant tissues, growth of petunia- and snapdragon (Antirrhinum majus) flowers and of cladodes from the cactus Opuntia ficus-indica. We found that petunia flowers grow at a steady pace and display a strong growth increase in the early morning, whereas Opuntia cladode growth turned out not to follow a circadian growth pattern under the growth conditions imposed. Furthermore we were able to identify a decoupling of increase in area and length indicating that two independent growth processes are responsible for the final size and shape of the cladode.
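
    As a hedged illustration of the geometric measurements described above, the sketch below segments the organ in each time-lapse frame and tracks its projected area and length, the two quantities whose decoupling is discussed. The Otsu thresholding strategy is an assumption, not the authors' software.

```python
# Per-frame projected area and length of the largest segmented object.
# Illustrative OpenCV sketch; segmentation strategy is an assumption.
import cv2
import numpy as np

def organ_area_and_length(frame_gray: np.ndarray):
    """Return (area_px, length_px) of the largest bright object in a frame."""
    _, mask = cv2.threshold(frame_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0, 0.0
    largest = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.minAreaRect(largest)
    return float(cv2.contourArea(largest)), float(max(w, h))

def growth_series(frames):
    """Per-frame (area, length) pairs; diverging trends between the two
    series would indicate independent growth processes, as discussed above."""
    return np.array([organ_area_and_length(f) for f in frames])
```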

  16. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030

    Science.gov (United States)

    Slotnick, Jeffrey P.; Khodadoust, Abdollah; Alonso, Juan J.; Darmofal, David L.; Gropp, William D.; Lurie, Elizabeth A.; Mavriplis, Dimitri J.; Venkatakrishnan, Venkat

    2014-01-01

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be ‘cleaner’ and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future. PMID:25024413

  17. Science Information Literacy Tutorials and Pedagogy

    Directory of Open Access Journals (Sweden)

    Ping Li

    2011-06-01

    Full Text Available Objective – This study examined information literacy tutorials in science. The goals of the research were to identify which of the information literacy standards for science, engineering and technology were addressed in the tutorials, and the extent to which the tutorials incorporated good pedagogical elements. Methods – The researcher chose for review 31 of the tutorials selected by members of the ACRL Science & Technology Section (STS) Information Literacy Committee. She carefully analyzed the tutorials and developed a database with codes for the topic of each tutorial, the STS information literacy standard(s) addressed by the tutorial, and whether good pedagogical elements were incorporated. The entire analysis and coding procedure was repeated three times to ensure consistency. Results – The tutorials analyzed in this study covered various subjects and addressed all the STS information literacy standards. The tutorials presented information clearly and allowed users to select their own learning paths. The incorporation of good pedagogical elements was limited, especially in relation to active learning elements. Conclusions – Web tutorials have been accepted as effective information literacy instruction tools and have been used to teach all elements of the STS information literacy standards. Yet, ensuring they provide a real learning experience for students remains a challenge. More serious thought needs to be given to integrating good pedagogy into these instructional tools in order to attain deep learning.

  18. An interactive tutorial-based training technique for vertebral morphometry.

    Science.gov (United States)

    Gardner, J C; von Ingersleben, G; Heyano, S L; Chesnut, C H

    2001-01-01

    The purpose of this work was to develop a computer-based procedure for training technologists in vertebral morphometry. The utility of the resulting interactive, tutorial based training method was evaluated in this study. The training program was composed of four steps: (1) review of an online tutorial, (2) review of analyzed spine images, (3) practice in fiducial point placement and (4) testing. During testing, vertebral heights were measured from digital, lateral spine images containing osteoporotic fractures. Inter-observer measurement precision was compared between research technicians, and between technologists and radiologist. The technologists participating in this study had no prior experience in vertebral morphometry. Following completion of the online training program, good inter-observer measurement precision was seen between technologists, showing mean coefficients of variation of 2.33% for anterior, 2.87% for central and 2.65% for posterior vertebral heights. Comparisons between the technicians and radiologist ranged from 2.19% to 3.18%. Slightly better precision values were seen with height measurements compared with height ratios, and with unfractured compared with fractured vertebral bodies. The findings of this study indicate that self-directed, tutorial-based training for spine image analyses is effective, resulting in good inter-observer measurement precision. The interactive tutorial-based approach provides standardized training methods and assures consistency of instructional technique over time.
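
    The inter-observer precision figures above are coefficients of variation; the sketch below shows the conventional calculation (per-vertebra standard deviation over mean, averaged across vertebrae). The study's exact statistic may differ, and the example numbers are hypothetical.

```python
# Conventional inter-observer coefficient of variation for paired
# vertebral height measurements.  Example values are hypothetical.
import numpy as np

def mean_cv_percent(measurements: np.ndarray) -> float:
    """measurements: (n_vertebrae, n_observers) array of vertebral heights."""
    means = measurements.mean(axis=1)
    sds = measurements.std(axis=1, ddof=1)   # sample SD across observers
    return float(np.mean(sds / means) * 100.0)

# Two observers, three vertebrae (hypothetical heights in mm):
heights = np.array([[21.3, 21.8], [19.9, 20.2], [22.5, 22.1]])
print(f"mean inter-observer CV: {mean_cv_percent(heights):.2f}%")
```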

  19. The convergence of robotics, vision, and computer graphics for user interaction

    Energy Technology Data Exchange (ETDEWEB)

    Hollerback, J.M.; Thompson, W.B.; Shirley, P.

    1999-11-01

    Mechanical interfaces to virtual environments and the creation of virtual environments represent important and relatively new application areas for robotics. The creation of immersive interfaces will require codevelopment of visual displays that complement mechanical stimuli with appropriate visual cues, ultimately determined from human psychophysics. Advances in interactive rendering and geometric modeling from computer graphics will play a key role. Examples are drawn from haptic and locomotion interface projects.

  20. Measuring human emotions with modular neural networks and computer vision based applications

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2015-05-01

    Full Text Available This paper describes a neural network architecture for emotion recognition for human-computer interfaces and applied systems. In the current research, we propose a combination of the most recent biometric techniques with the neural networks (NN) approach for real-time emotion and behavioral analysis. The system will be tested in real-time applications of customers' behavior for distributed on-land systems, such as kiosks and ATMs.