WorldWideScience

Sample records for vision machine combining

  1. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods. The Machine Vision Handbook equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – is discussed, followed by detailed treatment of various image-processing methods, including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  2. A two-level real-time vision machine combining coarse and fine grained parallelism

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Pauwels, Karl

    2010-01-01

    In this paper, we describe a real-time vision machine having a stereo camera as input, generating visual information on two different levels of abstraction. The system provides visual low-level and mid-level information in terms of dense stereo and optical flow, egomotion, indicating areas...... a factor of 90 and a reduction in latency by a factor of 26 compared to processing on a single CPU core. Since the vision machine provides generic visual information, it can be used in many contexts. Currently it is used in a driver assistance context as well as in two robotic applications....

  3. Understanding and applying machine vision

    CERN Document Server

    Zeuch, Nello

    2000-01-01

    A discussion of applications of machine vision technology in the semiconductor, electronic, automotive, wood, food, pharmaceutical, printing, and container industries. It describes systems that enable projects to move forward swiftly and efficiently, and focuses on the nuances of the engineering and system integration of machine vision technology.

  4. Machine Learning for Computer Vision

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2013-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year. A summary of the past Computer Vision Summer Schools can be found at: http://www.dmi.unict.it/icvss This edited volume contains a selection of articles covering some of the talks and t...

  5. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl

  6. Computer vision and machine learning for archaeology

    NARCIS (Netherlands)

    van der Maaten, L.J.P.; Boon, P.; Lange, G.; Paijmans, J.J.; Postma, E.

    2006-01-01

    Until now, computer vision and machine learning techniques have barely contributed to the archaeological domain. The use of these techniques can support archaeologists in their assessment and classification of archaeological finds. The paper illustrates the use of computer vision techniques for

  7. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  8. Machine vision for real time orbital operations

    Science.gov (United States)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  9. Machine vision and appearance based learning

    Science.gov (United States)

    Bernstein, Alexander

    2017-03-01

    Smart algorithms are used in machine vision to organize or extract high-level information from the available data. The resulting high-level understanding of the content of images received from a given visual sensing system, belonging to an appearance space, is only a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a regression problem on an appearance manifold, and new regression-on-manifolds methods are used for its solution.

  10. Development of Moire machine vision

    Science.gov (United States)

    Harding, Kevin G.

    1987-10-01

    Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and to provide the high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation capability is being developed to provide full-field range measurement and three dimensional scene analysis.

  12. Machine Vision Giving Eyes to Robots. Resources in Technology.

    Science.gov (United States)

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  13. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. A generally accepted model of such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet, as a rule, this model. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in this paper.
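
    As an illustration of the manifold model described above, the following minimal sketch (an assumed workflow, not the paper's own code) uses scikit-learn's Isomap to recover the low intrinsic dimensionality of synthetic data embedded in a higher-dimensional observation space.

      # Minimal sketch: discovering low-dimensional structure with Isomap.
      # Synthetic swiss-roll points stand in for high-dimensional image descriptors.
      from sklearn.datasets import make_swiss_roll
      from sklearn.manifold import Isomap

      # 2000 points lying on a 2D surface embedded in a 3D "observation space"
      X, _ = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

      # Unroll the manifold into 2 intrinsic coordinates
      embedding = Isomap(n_neighbors=12, n_components=2)
      Y = embedding.fit_transform(X)

      print("ambient dimension:", X.shape[1])    # 3
      print("intrinsic embedding:", Y.shape[1])  # 2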

  14. Machine Learning Techniques in Clinical Vision Sciences.

    Science.gov (United States)

    Caixinha, Miguel; Nunes, Sandrina

    2017-01-01

    This review presents and discusses the contribution of machine learning techniques to diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patient management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve the sensitivity and specificity of disease detection and monitoring, increasing the objectivity of the clinical decision-making process. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches are presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow creating homogeneous groups (unsupervised learning), or creating a classifier that predicts group membership of new cases (supervised learning), when a group label is available for each case. To ensure good performance of the machine learning techniques on a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, noise should be removed, missing data should be treated, and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques to ocular disease diagnosis and monitoring is presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples are presented in glaucoma, age-related macular degeneration
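
    A minimal sketch of the preprocessing-plus-classification workflow the review outlines (treating missing data, normalizing, adjusting dimensionality, then supervised classification). It uses scikit-learn with synthetic stand-in data and is one assumed way to realize those steps, not the authors' protocol.

      # Sketch: impute missing values, scale, reduce dimensionality, classify.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.impute import SimpleImputer
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.pipeline import Pipeline
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                                 random_state=0)
      X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan  # simulate missing data

      clf = Pipeline([
          ("impute", SimpleImputer(strategy="median")),  # treat missing values
          ("scale", StandardScaler()),                   # normalize features
          ("reduce", PCA(n_components=10)),              # adjust dimensionality
          ("svm", SVC(kernel="rbf", C=1.0)),             # supervised classifier
      ])

      scores = cross_val_score(clf, X, y, cv=5)
      print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))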

  15. Object recognition combining vision and touch.

    Science.gov (United States)

    Corradi, Tadeo; Hall, Peter; Iravani, Pejman

    2017-01-01

    This paper explores ways of combining vision and touch for the purpose of object recognition. In particular, it focuses on scenarios when there are few tactile training samples (as these are usually costly to obtain) and when vision is artificially impaired. Whilst machine vision is a widely studied field, and machine touch has received some attention recently, the fusion of both modalities remains a relatively unexplored area. It has been suggested that, in the human brain, there exist shared multi-sensorial representations of objects. This provides robustness when one or more senses are absent or unreliable. Modern robotics systems can benefit from multi-sensorial input, in particular in contexts where one or more of the sensors perform poorly. In this paper, a recently proposed tactile recognition model was extended by integrating a simple vision system in three different ways: vector concatenation (vision feature vector and tactile feature vector), object label posterior averaging and object label posterior product. A comparison is drawn in terms of overall accuracy of recognition and in terms of how quickly (number of training samples) learning occurs. The conclusions reached are: (1) the most accurate system is "posterior product", (2) multi-modal recognition has higher accuracy than either modality alone if all visual and tactile training data are pooled together, and (3) in the case of visual impairment, multi-modal recognition "learns faster", i.e. requires fewer training samples to achieve the same accuracy as either single modality.
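
    The fusion schemes named in the abstract are straightforward to express. The sketch below gives minimal, assumed implementations of posterior averaging and the posterior product (with renormalization); it presumes each modality's classifier already outputs per-class posterior probabilities and is not the authors' code.

      import numpy as np

      def fuse_average(p_vision, p_touch):
          """Object-label posterior averaging."""
          return (p_vision + p_touch) / 2.0

      def fuse_product(p_vision, p_touch, eps=1e-12):
          """Object-label posterior product, renormalized to sum to one."""
          p = p_vision * p_touch + eps
          return p / p.sum()

      # Example: per-class posteriors from each modality for a 4-object problem
      p_vision = np.array([0.50, 0.30, 0.15, 0.05])
      p_touch = np.array([0.20, 0.60, 0.10, 0.10])

      print("average:", fuse_average(p_vision, p_touch))
      print("product:", fuse_product(p_vision, p_touch))
      print("fused label:", int(np.argmax(fuse_product(p_vision, p_touch))))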

  16. A method for extracting elbow feature based on machine vision

    Science.gov (United States)

    Wu, Bing; Chen, Yajie; Wang, Jianmi

    2017-04-01

    With the continuous development of computer technology, machine vision has become a hot research topic, and it has been introduced into the field of mechanical processing for the detection of surface defects on machined objects. The application of machine vision to industrial production has become more and more popular. The research presented in this paper has broad prospects and room for development in the field of machine vision; it lays a foundation for solving the problems of manual processing and plays an important role in the development of a vision-enabled universal CNC system.

  17. Machine vision and mechatronics in practice

    CERN Document Server

    Brett, Peter

    2015-01-01

    The contributions for this book have been gathered over several years from conferences held in the series of Mechatronics and Machine Vision in Practice, the latest of which was held in Ankara, Turkey. The essential aspect is that they concern practical applications rather than the derivation of mere theory, though simulations and visualization are important components. The topics range from mining, with its heavy engineering, to the delicate machining of holes in the human skull or robots for surgery on human flesh. Mobile robots continue to be a hot topic, both from the need for navigation and for the task of stabilization of unmanned aerial vehicles. The swinging of a spray rig is damped, while machine vision is used for the control of heating in an asphalt-laying machine.  Manipulators are featured, both for general tasks and in the form of grasping fingers. A robot arm is proposed for adding to the mobility scooter of the elderly. Can EEG signals be a means to control a robot? Can face recognition be ac...

  18. Learning surface molecular structures via machine vision

    Science.gov (United States)

    Ziatdinov, Maxim; Maksov, Artem; Kalinin, Sergei V.

    2017-08-01

    Recent advances in high resolution scanning transmission electron and scanning probe microscopies have allowed researchers to perform measurements of materials' structural parameters and functional properties in real space with a picometre precision. In many technologically relevant atomic and/or molecular systems, however, the information of interest is distributed spatially in a non-uniform manner and may have a complex multi-dimensional nature. One of the critical issues, therefore, lies in being able to accurately identify ('read out') all the individual building blocks in different atomic/molecular architectures, as well as more complex patterns that these blocks may form, on a scale of hundreds and thousands of individual atomic/molecular units. Here we employ machine vision to read and recognize complex molecular assemblies on surfaces. Specifically, we combine a Markov random field model and convolutional neural networks to classify structural and rotational states of all individual building blocks in a molecular assembly on a metallic surface visualized in high-resolution scanning tunneling microscopy measurements. We show how the obtained full decoding of the system allows us to directly construct a pair density function (a centerpiece in the analysis of the disorder-property relationship paradigm), as well as to analyze spatial correlations between multiple order parameters at the nanoscale and elucidate a reaction pathway involving molecular conformation changes. The method represents a significant shift in the way we analyze atomic and/or molecular resolved microscopic images and can be applied to a variety of other microscopic measurements of structural, electronic, and magnetic orders in different condensed matter systems.
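
    Once every building block has been located and classified, constructing the pair density (radial distribution) function mentioned above reduces to histogramming pairwise distances. The sketch below is a minimal, assumed implementation for 2D positions with an approximate area normalization; it is not the authors' analysis code, and the random positions are placeholders.

      # Sketch: pair density (radial distribution) function from decoded 2D positions.
      import numpy as np

      def pair_density(points, r_max, n_bins=100):
          """Histogram of pairwise distances, approximately normalized by shell area and density."""
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          d = d[np.triu_indices(len(points), k=1)]        # unique pairs only
          hist, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
          r = 0.5 * (edges[:-1] + edges[1:])
          shell_area = 2.0 * np.pi * r * np.diff(edges)   # 2D annulus area
          density = len(points) / np.ptp(points, axis=0).prod()  # bounding-box estimate
          g = hist / (shell_area * density * len(points) / 2.0)
          return r, g

      points = np.random.default_rng(0).uniform(0, 50, size=(500, 2))  # placeholder coordinates
      r, g = pair_density(points, r_max=10.0)
      print(r[:5], g[:5])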

  19. Machine Vision Implementation in Rapid PCB Prototyping

    Directory of Open Access Journals (Sweden)

    Yosafat Surya Murijanto

    2012-03-01

    Image processing, the heart of machine vision, has proven itself to be an essential part of industry today. Its application has opened new doorways, making more concepts in manufacturing processes viable. This paper presents an application of machine vision in designing a module with the ability to extract drill and route coordinates from an un-mounted or mounted printed circuit board (PCB). The algorithm comprises pre-capturing processes, image segmentation and filtering, edge and contour detection, coordinate extraction, and G-code creation. OpenCV libraries and the Qt IDE are the main tools used. Through testing and experiments, it is concluded that the algorithm delivers acceptable results. The drilling and routing coordinate extraction algorithm can extract on average 90% of the drills and 82% of the routes available on the scanned PCB in a total processing time of less than 3 seconds. This is achievable through proper lighting conditions, good PCB surface condition and good webcam quality.
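
    A compact, assumed sketch of the drill-coordinate stage described in the abstract: threshold the PCB image with OpenCV, find contours of drill pads, take their centroids, and emit simple G-code drill moves. Function names, the pixel-to-millimetre scale and the G-code template are illustrative, not the authors' module.

      # Sketch: extract drill centroids from a PCB image and emit G-code drill moves.
      import cv2

      def drill_coordinates(image_path, mm_per_px=0.1):
          gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          gray = cv2.GaussianBlur(gray, (5, 5), 0)                           # filtering
          _, binary = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # segmentation
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)            # contour detection
          points = []
          for c in contours:
              m = cv2.moments(c)
              if m["m00"] > 0:
                  points.append((m["m10"] / m["m00"] * mm_per_px,
                                 m["m01"] / m["m00"] * mm_per_px))           # centroid in mm
          return points

      def to_gcode(points, drill_depth=-1.6):
          lines = ["G21", "G90"]                                             # mm units, absolute moves
          for x, y in points:
              lines += [f"G0 X{x:.3f} Y{y:.3f}", f"G1 Z{drill_depth}", "G0 Z2.0"]
          return "\n".join(lines)

      # print(to_gcode(drill_coordinates("pcb.png")))  # "pcb.png" is a placeholder path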

  20. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  1. Direction Identification System of Garlic Clove Based on Machine Vision

    OpenAIRE

    Gao Chi; Gao Hui

    2013-01-01

    In order to fulfill the requirements on the seeding direction of garlic cloves, this paper proposes a machine vision based method for garlic clove direction identification. It expounds the theory of garlic clove direction identification, states its algorithm, designs the direction identification device, then develops the machine vision based control system for garlic clove direction identification, and finally tests the garlic clove direction identification, and the resul...

  2. Hardware implementation of machine vision systems: image and video processing

    Science.gov (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

    2013-12-01

    This contribution focuses on the different topics covered by the special issue titled 'Hardware Implementation of Machine Vision Systems', including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics based vision, 3D processing/coding, scene understanding, and multimedia.

  3. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  4. Machine Learning, deep learning and optimization in computer vision

    Science.gov (United States)

    Canu, Stéphane

    2017-03-01

    As quoted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning covering basic motivations, ideas, models and optimization in deep learning for computer vision, identifying challenges and opportunities. It will focus on issues related with large scale learning that is: high dimensional features, large variety of visual classes, and large number of examples.

  5. Trends and developments in industrial machine vision: 2013

    Science.gov (United States)

    Niel, Kurt; Heinzl, Christoph

    2014-03-01

    When following current advancements and implementations in the field of machine vision, there seem to be no borders for future developments: calculating power constantly increases, new ideas are spreading, and previously challenging approaches are introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples, face recognition was adopted by the consumer market, 3D capturing became cheap, and thanks to the huge community software coding became easier using sophisticated development platforms. However, there is still a remaining gap between consumer and industrial applications: while the first have to be entertaining, the second have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges for industrial machine vision are simple usage, reliability for the process, quick support, full automation, self/easy adjustment to changing process parameters, and "forget it in the line". Furthermore, a big challenge is to support quality control: nowadays the operator has to accurately define the tested features for checking the probes. There is also an upcoming development to let automated machine vision applications find out essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation, quality control (inline/atline/offline), and visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented towards object-oriented evaluation is addressed. We do not directly address the field of robotics taking advances from machine vision; this is actually a fast changing area which is worth an own

  6. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  7. Machine learning and computer vision approaches for phenotypic profiling.

    Science.gov (United States)

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  8. Machine Vision For Industrial Control:The Unsung Opportunity

    Science.gov (United States)

    Falkman, Gerald A.; Murray, Lawrence A.; Cooper, James E.

    1984-05-01

    Vision modules have primarily been developed to relieve those pressures newly brought into existence by Inspection (QUALITY) and Robotic (PRODUCTIVITY) mandates. Industrial Control pressure stems, on the other hand, from the older first industrial revolution mandate of throughput. Satisfying such pressure calls for speed in both imaging and decision making. Vision companies have, however, put speed on a backburner or ignored it entirely because most modules are computer/software based, which limits their speed potential. Increasingly, the keynote being struck at machine vision seminars is that "Visual and Computational Speed Must Be Increased and Dramatically!" There are modular hardwired-logic systems that are fast but, all too often, they are not very bright. Such units measure the fill factor of bottles as they spin by, read labels on cans, count stacked plastic cups or monitor the width of parts streaming past the camera. Many are only a bit more complex than a photodetector. Once in place, most of these units are incapable of simple upgrading to a new task and are Vision's analog to the robot industry's pick-and-place (RIA TYPE E) robot. Vision thus finds itself amidst the same quandaries that once beset the Robot Industry of America when it tried to define a robot, excluded dumb ones, and was left with only slow machines whose unit volume potential is shatteringly low. This paper develops an approach to meeting the need for a vision system that cuts a swath into the terra incognita of intelligent, high-speed vision processing. Main attention is directed to vision for industrial control. Some presently untapped vision application areas that will be serviced include: Electronics, Food, Sports, Pharmaceuticals, Machine Tools and Arc Welding.

  9. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  10. Machine vision for a selective broccoli harvesting robot

    NARCIS (Netherlands)

    Blok, Pieter M.; Barth, Ruud; Berg, Van Den Wim

    2016-01-01

    The selective hand-harvest of fresh market broccoli is labor-intensive and comprises about 35% of the total production costs. This research was conducted to determine whether machine vision can be used to detect broccoli heads, as a first step in the development of a fully autonomous selective

  11. Design and construction of automatic sorting station with machine vision

    Directory of Open Access Journals (Sweden)

    Oscar D. Velasco-Delgado

    2014-01-01

    This article presents the design, construction and testing of an automatic product sorting system on a belt conveyor with machine vision that integrates free and open source software technology and Allen Bradley commercial equipment. Requirements are defined to determine features such as the mechanics of the manufacturing station, the machine vision product sorting application, and the automation system. The machine vision application uses the OpenCV library for optical digital image processing; the mechanical design of the manufacturing station uses the Solid Edge CAD tool; and the design and implementation of the automation follows ISA standards together with an automation engineering project methodology integrating a PLC, an inverter, a PanelView and a DeviceNet network. Performance tests are shown by classifying bottles and PVC pieces into four established types, checking the behavior of the integrated system as well as its efficiency. The average machine vision processing time is 0.290 s for a PVC piece, a capacity of 206 pieces per minute; for bottles a processing time of 0.267 s was obtained, a capacity of 224 bottles per minute. Maximum mechanical performance is obtained at 32 products per minute (1920 products/hour) with the conveyor at 22 cm/s and 40 cm between products, with an average error of 0.8%.
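
    As a quick plausibility check (not from the paper, and ignoring handling overhead and rounding), the reported capacities follow directly from the cycle times and conveyor settings, consistent with the stated 206 and 224 per-minute vision capacities and the 32 products/min (1920 products/h) mechanical limit:

      \[
        \frac{60\,\mathrm{s/min}}{0.290\,\mathrm{s/piece}} \approx 207\ \mathrm{pieces/min},
        \qquad
        \frac{60\,\mathrm{s/min}}{0.267\,\mathrm{s/bottle}} \approx 225\ \mathrm{bottles/min},
        \qquad
        \frac{22\,\mathrm{cm/s}\times 60\,\mathrm{s/min}}{40\,\mathrm{cm/product}} \approx 33\ \mathrm{products/min}.
      \]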

  12. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments, with high variability in illumination, irregular terrain conditions or different plant growth states, among others. In this regard, three main topics have been conveniently addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic with illustrative examples focused on specific applications in agriculture, although they could be applied in different contexts other than agricultural. A case study is provided as a result of research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project for effective weed control in maize fields (wide-row crops), funded by the European Union, where the machine vision system onboard the autonomous vehicles was the most important part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided together with a review of methods and approaches on these topics.

  13. CAIP system for vision-based on-machine measurement

    Science.gov (United States)

    Xia, Rui-xue; Lu, Rong-sheng; Shi, Yan-qiong; Li, Qi; Dong, Jing-tao; Liu, Ning

    2011-12-01

    Computer-Aided Inspection Planning (CAIP) is an important module of modern dimensional measuring instruments, and utilizing CAIP for machined-part inspection is an important indication of the level of automation and intelligence. Aiming at the characteristics of visual inspection, a CAIP system for vision-based On-Machine Measurement (OMM) is developed on a CAD development platform whose kernel is Open CASCADE. The working principle of the vision-based OMM system is introduced, and the key technologies of CAIP include inspection information extraction, sampling strategy, inspection path planning, inspection code generation, inspection procedure verification, data post-processing, comparison, and so on. The entire system was verified on a CNC milling machine, and relevant examples show that the system can accomplish automatic inspection planning tasks for common parts efficiently.

  14. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    Science.gov (United States)

    Ilyas, Ismet P.

    2013-06-01

    The manufacturing environment changes rapidly and turbulently. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up vision and strategic planning toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and Additive Manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung in accelerating product design and process development, and discusses a direct deployment of this approach on a real case from our industrial partners, who regard it as a very important and strategic approach in research as well as product/prototype development. The strategic aspects and needs of this combined approach in research, design and development are the main concerns of the presentation.

  15. Research on Manufacturing Technology Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    HU Zhanqi; ZHENG Kuijing

    2006-01-01

    The concept of machine vision based manufacturing technology is proposed first, and the key algorithms used in two-dimensional and three-dimensional machining are discussed in detail. Machining information can be derived from binary and grayscale images after processing and transforming the picture. Contour and parallel cutting methods for two-dimensional machining are proposed. A polygon approximation algorithm is used to cut the profile of the workpiece, and a fill-scanning algorithm is used to machine the inner part of a pocket. An improved Shape From Shading method with adaptive pre-processing is adopted to reconstruct the three-dimensional model, and a layer cutting method is adopted for three-dimensional machining. The tool path is then obtained from the model, and NC code is generated subsequently. The model can be machined conveniently by a lathe, milling machine or engraver. Some examples are given to demonstrate the results of the ImageCAM system, which was developed by the authors to implement the algorithms previously mentioned.
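
    A minimal sketch (an assumption, not the ImageCAM code) of the polygon-approximation step for the workpiece profile, using the Douglas-Peucker approximation available in OpenCV:

      # Sketch: approximate a workpiece contour by a polygon for 2D profile cutting.
      import cv2
      import numpy as np

      def profile_polygon(binary_image, epsilon_ratio=0.01):
          """Return the dominant contour approximated as a polygon (N x 2 pixel vertices)."""
          contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          workpiece = max(contours, key=cv2.contourArea)       # largest blob = workpiece
          epsilon = epsilon_ratio * cv2.arcLength(workpiece, True)
          poly = cv2.approxPolyDP(workpiece, epsilon, True)    # Douglas-Peucker approximation
          return poly.reshape(-1, 2)

      # Synthetic example: a filled rectangle reduces to (approximately) 4 vertices.
      img = np.zeros((200, 300), np.uint8)
      cv2.rectangle(img, (50, 40), (250, 160), 255, -1)
      print(profile_polygon(img))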

  16. Machine vision based quality inspection of flat glass products

    Science.gov (United States)

    Zauner, G.; Schagerl, M.

    2014-03-01

    This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little 'image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (std. deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. In this way, the following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi-class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross-validation for evaluation purposes.
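
    The feature-plus-classifier evaluation described above can be sketched briefly. The code below is an illustrative assumption (random stand-in patches rather than the authors' defect database): it computes a few of the named histogram and Hu-moment features and cross-validates one of the compared classifiers, a random forest.

      # Sketch: histogram-based and Hu-moment features for small defect patches,
      # evaluated with cross-validation. Patches here are random placeholders.
      import numpy as np
      import cv2
      from scipy.stats import skew, kurtosis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def defect_features(patch):
          """std. deviation, kurtosis, skewness + 7 Hu moments of a grayscale patch."""
          flat = patch.ravel().astype(np.float64)
          hu = cv2.HuMoments(cv2.moments(patch)).ravel()
          return np.concatenate(([flat.std(), kurtosis(flat), skew(flat)], hu))

      rng = np.random.default_rng(0)
      patches = rng.integers(0, 256, size=(200, 32, 32), dtype=np.uint8)
      labels = rng.integers(0, 5, size=200)            # five defect classes (placeholder labels)

      X = np.array([defect_features(p) for p in patches])
      scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                               X, labels, cv=5)
      print("cross-validated accuracy: %.2f" % scores.mean())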

  17. Two dimensional convolute integers for machine vision and image recognition

    Science.gov (United States)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of artificial intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two-dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two-dimensional low-pass, high-pass and band-pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators show frequency-sensitive feature selection and scale-invariant properties. Tasks such as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band-pass operators are essential. Results from test cases are given.
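
    A brief sketch of applying integer-valued convolution operators non-recursively as classical convolutions. The kernels below are ordinary integer smoothing and Laplacian-like masks chosen for illustration; they are assumptions, not Edwards' regression-generated operator family.

      # Sketch: integer-valued 2D convolution operators applied to an image,
      # one low-pass (smoothing) and one high-pass (edge-enhancing) mask.
      import numpy as np
      from scipy.ndimage import convolve

      low_pass = np.array([[1, 2, 1],
                           [2, 4, 2],
                           [1, 2, 1]])               # integer smoothing kernel (sum 16)

      high_pass = np.array([[-1, -1, -1],
                            [-1,  8, -1],
                            [-1, -1, -1]])           # integer Laplacian-like kernel (sum 0)

      image = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(float)

      smoothed = convolve(image, low_pass) / low_pass.sum()  # replacement point values
      edges = convolve(image, high_pass)                     # boundary/edge enhancement

      print(smoothed.shape, edges.min(), edges.max())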

  18. Machine vision automated visual inspection theory, practice and applications

    CERN Document Server

    Beyerer, Jürgen; Frese, Christian

    2016-01-01

    The book offers a thorough introduction to machine vision. It is organized in two parts. The first part covers the image acquisition, which is the crucial component of most automated visual inspection systems. All important methods are described in great detail and are presented with a reasoned structure. The second part deals with the modeling and processing of image signals and pays particular regard to methods, which are relevant for automated visual inspection.

  19. Software architecture for time-constrained machine vision applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
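
    A minimal sketch of the topic-based publish/subscribe pattern the messaging layer uses: messages published to a topic are routed only to handlers subscribed to that topic. Class and topic names are illustrative assumptions, not the authors' API.

      # Sketch: topic-based publish/subscribe routing for a vision pipeline.
      from collections import defaultdict
      from typing import Any, Callable

      class MessageBus:
          def __init__(self) -> None:
              self._subscribers = defaultdict(list)  # topic -> list of handler callables

          def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
              """Register a handler interested in one topic."""
              self._subscribers[topic].append(handler)

          def publish(self, topic: str, message: Any) -> None:
              """Route a message only to handlers subscribed to its topic."""
              for handler in self._subscribers[topic]:
                  handler(message)

      bus = MessageBus()
      bus.subscribe("frames/raw", lambda m: print("processing module got", m))
      bus.subscribe("alarms/jam", lambda m: print("user interface got", m))

      bus.publish("frames/raw", {"frame_id": 42})
      bus.publish("alarms/jam", {"line": 3, "severity": "high"})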

  20. Practical guide to machine vision software an introduction with LabVIEW

    CERN Document Server

    Kwon, Kye-Si

    2014-01-01

    For both students and engineers in R&D, this book explains machine vision in a concise, hands-on way, using the Vision Development Module of the LabView software by National Instruments. Following a short introduction to the basics of machine vision and the technical procedures of image acquisition, the book goes on to guide readers in the use of the various software functions of LabView's machine vision module. It covers typical machine vision tasks, including particle analysis, edge detection, pattern and shape matching, dimension measurements as well as optical character recognition, enabli

  1. Aircraft exterior scratch measurement system using machine vision

    Science.gov (United States)

    Sarr, Dennis P.

    1991-08-01

    To assure the quality of aircraft skin, it must be free of surface imperfections and structural defects. Manual inspection methods involve mechanical and optical technologies. Machine vision instrumentation can be automated to increase the inspection rate and the repeatability of measurement. As shown by previous industry experience, machine vision instrumentation methods are not calibrated and certified as easily as mechanical devices. The defect must be accurately measured and documented via a printout for engineering evaluation and disposition. In actual usage of the instrument for inspection, the device must be portable for use in the factory, on the flight line, or on an aircraft anywhere in the world. The instrumentation must be inexpensive and operable by someone with a mechanic/technician level of training. The instrument design requirements are extensive, requiring a multidisciplinary approach to the research and development. This paper presents image analysis results for laser images of the microscopic structure of scratches on various surfaces, and discusses the hardware and algorithms used to process these images. Dedicated hardware and embedded software for implementing the image acquisition and analysis have been developed. In the human interface, human vision is used to determine which image should be processed. Once the image is chosen for analysis, the final answer is a numerical value of the scratch depth. The result is an answer that is reliable and repeatable. The prototype has been built and demonstrated to Boeing Commercial Airplanes Group factory Quality Assurance and flight test management, with a favorable response.

  2. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  3. A discrepancy within primate spatial vision and its bearing on the definition of edge detection processes in machine vision

    Science.gov (United States)

    Jobson, Daniel J.

    1990-01-01

    The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial frequency performance of primates or the performance which is necessary for form perception in humans. This discrepancy together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision prompted a series of image processing experiments intended to provide some definition of the selection of image operators. The experiments were aimed at determining operators which could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators, with peak spatial frequency responses at about 11 and 33 cyc/deg were found to capture the primary structural information in images. Here larger scale circular DOG operators were explored and led to severe loss of image structure and introduced spatial dislocations (due to blur) in structure which is not consistent with visual perception. Orientation sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes which are functionally similar to natural vision form perception, two circularly symmetric very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
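
    A circularly symmetric difference-of-Gaussian operator of the kind discussed above is straightforward to construct. The sketch below uses an arbitrary standard-deviation ratio, not the paper's calibrated 11 or 33 cyc/deg channels, and is an illustrative assumption.

      # Sketch: build a circular difference-of-Gaussian (DOG) kernel and filter an image.
      import numpy as np
      from scipy.ndimage import convolve

      def dog_kernel(size=21, sigma_center=1.0, sigma_surround=1.6):
          """Center-surround DOG kernel; the sigma ratio here is illustrative."""
          ax = np.arange(size) - size // 2
          xx, yy = np.meshgrid(ax, ax)
          r2 = xx**2 + yy**2
          center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
          surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
          return center - surround

      image = np.random.default_rng(0).random((128, 128))
      response = convolve(image, dog_kernel())   # band-pass, edge-sensitive response
      print(response.shape)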

  4. Binocular combination in abnormal binocular vision

    Science.gov (United States)

    Ding, Jian; Klein, Stanley A.; Levi, Dennis M.

    2013-01-01

    We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments
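
    For reference, a baseline prediction for the phase task follows from simple contrast-weighted linear summation; this omits the interocular gain control and asymmetric enhancement terms of the modified Ding-Sperling model used in the paper, and the symbols m_DE, m_NDE and theta are notation introduced here. If the two eyes see gratings of contrasts m_DE and m_NDE with phase shifts of plus and minus theta, then

      \[
        m_{DE}\sin(x+\theta) + m_{NDE}\sin(x-\theta) = A\,\sin(x+\varphi),
        \qquad
        \tan\varphi = \frac{m_{DE}-m_{NDE}}{m_{DE}+m_{NDE}}\,\tan\theta .
      \]

    Under this baseline the perceived (cyclopean) phase is zero only when the two contrasts are equal, so the finding that balance requires a higher NDE contrast implies an effective attenuation of the NDE signal before combination.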

  5. Rethinking Robot Vision - Combining Shape and Appearance

    Directory of Open Access Journals (Sweden)

    Matthias J. Schlemmer

    2008-11-01

    Equipping autonomous robots with vision sensors provides a multitude of advantages while simultaneously bringing up difficulties with regard to different illumination conditions. Furthermore, especially with service robots, the objects to be handled must somehow be learned for later manipulation. In this paper we summarise work on combining two different vision sensors, namely a laser range scanner and a monocular colour camera, for shape capturing, detection and tracking of objects in cluttered scenes without the need for intermediate user interaction. The use of different sensor types provides the advantage of separating the shape and the appearance of the object and therefore overcomes the problem of changing illumination conditions. We describe the framework and its components for visual shape capturing, fast 3D object detection and robust tracking, as well as examples that show the feasibility of this approach.

  6. Vision-based on-machine measurement for CNC machine tool

    Science.gov (United States)

    Xia, Ruixue; Han, Jiang; Lu, Rongsheng; Xia, Lian

    2015-02-01

    A vision-based on-machine measurement (OMM) system was developed to improve manufacturing effectiveness. It is based on a visual probe that enables the CNC machine tool itself to act as a coordinate measuring machine (CMM) to inspect a workpiece. The proposed OMM system is composed of a visual probe and two software modules: a computer-aided inspection planning (CAIP) module and a measurement data processing (MDP) module. The auto-focus function of the visual probe was realized using the astigmatic method. The CAIP module was developed on a CAD development platform with Open CASCADE as its kernel. The MDP module includes algorithms for the determination of inspection parameters; for example, a chamfered hole is measured through focus variation. The entire system was verified on a CNC milling machine.

  7. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects the assembly reliability and the quality of products. It is important to measure the processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or tools and can realize online measurement.
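
    The stereo-matching step that yields global endpoint coordinates can be sketched with OpenCV's triangulation routine. The intrinsic matrix, rotation, baseline and matched pixel coordinates below are placeholders, not the paper's calibration.

      # Sketch: triangulate a tube endpoint from matched pixels in two calibrated views.
      import numpy as np
      import cv2

      # Placeholder calibration: shared intrinsics K, second camera rotated and translated
      K = np.array([[1000.0, 0.0, 640.0],
                    [0.0, 1000.0, 480.0],
                    [0.0, 0.0, 1.0]])
      R = cv2.Rodrigues(np.array([[0.0], [0.05], [0.0]]))[0]     # small rotation about y
      t = np.array([[-120.0], [0.0], [0.0]])                     # baseline in mm
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # 3x4 projection, left camera
      P2 = K @ np.hstack([R, t])                                 # 3x4 projection, right camera

      # Matched endpoint pixel coordinates in left/right images (placeholders)
      pt_left = np.array([[512.0], [384.0]])
      pt_right = np.array([[498.0], [384.0]])

      X_h = cv2.triangulatePoints(P1, P2, pt_left, pt_right)     # homogeneous 4x1 point
      X = (X_h[:3] / X_h[3]).ravel()                             # metric endpoint coordinates (mm)
      print("endpoint coordinates:", X)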

  8. Path planning for machine vision assisted, teleoperated pavement crack sealer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y.S.; Haas, C.T.; Greer, R. [Univ. of Texas, Austin, TX (United States)

    1998-03-01

    During the last few years, several teleoperated and machine-vision-assisted systems have been developed in construction and maintenance areas such as pavement crack sealing, sewer pipe rehabilitation, excavation, surface finishing, and materials handling. This paper presents a path-planning algorithm used for a machine-vision-assisted automatic pavement crack sealing system. In general, path planning is an important task for optimal motion of a robot, whether its environment is structured or unstructured. Manual path planning is not always possible or desirable. A simple greedy path algorithm is utilized for optimal motion of the automated pavement crack sealer. Some unique and broadly applicable computational tools and data structures are required to implement the algorithm in a digital image domain. These components are described, and then the performance of the algorithm is compared with the implicit manual path plans of system operators. The comparison is based on computational cost versus overall gains in crack-sealing-process efficiency. Applications of this work in teleoperation, graphical control, and other infrastructure maintenance areas are also suggested.
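
    A simple greedy path plan of the kind the paper describes can be illustrated as a nearest-endpoint-next ordering of crack segments extracted from the image. The sketch below is a hedged stand-in for the authors' implementation; the segment coordinates are placeholders.

      # Sketch: greedy path planning over crack segments in image coordinates.
      # Each crack is a (start, end) pair of pixel coordinates; the sealer always
      # moves next to the nearest endpoint of an unsealed crack.
      import math

      def greedy_path(cracks, start=(0.0, 0.0)):
          remaining = list(cracks)
          position, order = start, []
          while remaining:
              # Pick the crack whose nearer endpoint is closest to the current position
              best = min(remaining,
                         key=lambda c: min(math.dist(position, c[0]),
                                           math.dist(position, c[1])))
              remaining.remove(best)
              a, b = best
              if math.dist(position, b) < math.dist(position, a):
                  a, b = b, a                  # enter the crack at its nearer end
              order.append((a, b))
              position = b                     # sealing finishes at the far end
          return order

      cracks = [((10, 5), (40, 8)), ((42, 30), (15, 33)), ((5, 60), (50, 65))]
      for seg in greedy_path(cracks):
          print("seal", seg)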

  9. Diurnal auroral occurrence statistics obtained via machine vision

    Directory of Open Access Journals (Sweden)

    M. T. Syrjäsuo

    2004-04-01

    Modern ground-based digital auroral All-Sky Imager (ASI) networks capture millions of images annually. Machine vision techniques are widely utilised in the retrieval of images from large databases. Clearly, they can play an important scientific role in dealing with data from auroral ASI networks, facilitating both efficient searches and statistical studies. Furthermore, the development of automated techniques for identifying specific types of aurora opens up the potential of ASI control software that would change instrument operation in response to evolving geophysical conditions. In this paper, we describe machine vision techniques that we have developed for use on large auroral image data sets. We present the results of applying these techniques to a 350,000-image subset of the CANOPUS Gillam ASI data from the years 1993–1998. In particular, we obtain occurrence statistics for auroral arcs, patches, and Omega bands. These results agree with those of previous manual auroral surveys.

    Key words. Ionosphere (instruments and techniques); General (new fields)

  10. Diurnal auroral occurrence statistics obtained via machine vision

    Directory of Open Access Journals (Sweden)

    M. T. Syrjäsuo

    2004-04-01

    Modern ground-based digital auroral All-Sky Imager (ASI) networks capture millions of images annually. Machine vision techniques are widely utilised in the retrieval of images from large databases. Clearly, they can play an important scientific role in dealing with data from auroral ASI networks, facilitating both efficient searches and statistical studies. Furthermore, the development of automated techniques for identifying specific types of aurora opens up the potential of ASI control software that would change instrument operation in response to evolving geophysical conditions. In this paper, we describe machine vision techniques that we have developed for use on large auroral image data sets. We present the results of applying these techniques to a 350,000-image subset of the CANOPUS Gillam ASI data from the years 1993–1998. In particular, we obtain occurrence statistics for auroral arcs, patches, and Omega bands. These results agree with those of previous manual auroral surveys. Key words. Ionosphere (instruments and techniques); General (new fields)

  11. INFIBRA: machine vision inspection of acrylic fiber production

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  12. Contrast and phase combination in binocular vision.

    Science.gov (United States)

    Huang, Chang-Bing; Zhou, Jiawei; Zhou, Yifeng; Lu, Zhong-Lin

    2010-12-09

    How the visual system combines information from the two eyes to form a unitary binocular representation of the external world is a fundamental question in vision science that has been the focus of many psychophysical and physiological investigations. Ding & Sperling (2006) measured perceived phase of the cyclopean image, and developed a binocular combination model in which each eye exerts gain control on the other eye's signal and over the other eye's gain control. Critically, the relative phase of the monocular sine-waves plays a central role. We used the Ding-Sperling paradigm but measured both the perceived contrast and phase of cyclopean images in three hundred and eighty combinations of base contrast, interocular contrast ratio, eye origin of the probe, and interocular phase difference. We found that the perceived contrast of the cyclopean image was independent of the relative phase of the two monocular gratings, although the perceived phase depended on the relative phase and contrast ratio of the monocular images. We developed a new multi-pathway contrast-gain control model (MCM) that elaborates the Ding-Sperling binocular combination model in two ways: (1) phase and contrast of the cyclopean images are computed in separate pathways, although with shared cross-eye contrast-gain control; and (2) phase-independent local energy from the two monocular images is used in binocular contrast combination. With three free parameters, the model yielded an excellent account of data from all the experimental conditions. Binocular phase combination depends on the relative phase and contrast ratio of the monocular images but binocular contrast combination is phase-invariant. Our findings suggest the involvement of at least two separate pathways in binocular combination.
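
    For context, the perceived-phase prediction of the original Ding–Sperling gain-control model, which the MCM elaborates, is often quoted in the following closed form (a sketch only; the parameterization, with interocular contrast ratio $\delta$, interocular phase difference $\theta$ and gain-control exponent $\gamma$, is stated here as an assumption rather than taken from this paper):

    $$\hat{\varphi} \;=\; 2\,\arctan\!\left(\frac{1-\delta^{\,1+\gamma}}{1+\delta^{\,1+\gamma}}\,\tan\frac{\theta}{2}\right)$$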

  13. Characterization of oats (Avena sativa L.) cultivars using machine vision.

    Science.gov (United States)

    Sumathi, S; Balamurugan, P

    2013-10-15

    Machine vision, or image analysis, is an important tool in the study of the morphology of any material. This technique has been used successfully to differentiate eleven oat cultivars based on morphological characters. The geometry of the seeds was measured with an image analyzer and the variation was observed and recorded. Cluster analysis of the recorded data revealed that the cultivars could be grouped into two main clusters based on similarity in the measured parameters. Cultivars Sabzar, UPO 212, OL 9 and OL 88 formed one main cluster. The other main cluster includes cv. Kent, OS 6, UPO 94, HFO 114, OS 7, HJ 8 and JHO 822, with many sub-clusters. Among the cultivars, HJ 8 and JHO 822 show more similarity in all measured parameters than the others. Thus morphological characterization through seed image analysis was found useful for discriminating the cultivars.

  14. Design of Gear Defect Detection System Based on Machine Vision

    Science.gov (United States)

    Wang, Yu; Wu, Zhiheng; Duan, Xianyun; Tong, Jigang; Li, Ping; Chen, min; Lin, Qinglin

    2018-01-01

    In order to solve such problems as the low efficiency, low quality and instability of gear surface defect detection, we designed a detection system based on machine vision and sensor coupling. Images of gear products are collected by a CCD camera through multi-sensor coupling and then analysed and processed in VS2010 using the Halcon library. Finally, the results are fed back to the control end, and rejected parts are removed to a collecting box. The system successfully identified defective gears. The test results show that the system can identify and eliminate defective gears quickly and efficiently, meeting the requirements of automated gear defect detection lines and showing practical application value.

  15. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  16. Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions

    Science.gov (United States)

    Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners

    1995-01-01

    In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...

  17. Machine vision detection of bonemeal in animal feed samples.

    Science.gov (United States)

    Nansen, Christian; Herrman, Timothy; Swanson, Rand

    2010-06-01

    There is growing public concern about contaminants in food and feed products, and reflection-based machine vision systems can be used to develop automated quality control systems. An important risk factor in animal feed products is the presence of prohibited ruminant-derived bonemeal that may contain the BSE (Bovine Spongiform Encephalopathy) prion. Animal feed products are highly complex in composition and texture (i.e., vegetable products, mineral supplements, fish and chicken meal), and current contaminant detection systems rely heavily on labor-intensive microscopy. In this study, we developed a training data set comprising 3.65 million hyperspectral profiles of which 1.15 million were from bonemeal samples, 2.31 million from twelve other feed materials, and 0.19 million denoting light green background (bottom of Petri dishes holding feed materials). Hyperspectral profiles in 150 spectral bands between 419 and 892 nm were analyzed. The classification approach was based on a sequence of linear discriminant analyses (LDA) to gradually improve the classification accuracy of hyperspectral profiles (reduce level of false positives), which had been classified as bonemeal in previous LDAs. That is, all hyperspectral profiles classified as bonemeal in an initial LDA (31% of these were false positives) were used as input data in a second LDA with new discriminant functions. Hyperspectral profiles classified as bonemeal in LDA2 (false positives were equivalent to 16%) were used as input data in a third LDA. This approach was repeated twelve times, in which at each step hyperspectral profiles were eliminated if they were classified as feed material (not bonemeal). Four independent feed materials were experimentally contaminated with 0-25% (by weight) bonemeal and used for validation. The analysis presented here provides support for development of an automated machine vision to detect bonemeal contamination around the 1% (by weight) level and therefore constitutes an
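
    The cascaded screening idea can be sketched as follows: at each stage an LDA is retrained on only those profiles the previous stage still labelled as bonemeal, progressively removing false positives. The sketch below (scikit-learn, with assumed variable names) illustrates the idea rather than reproducing the authors' analysis.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def cascade_lda(X, y, n_stages=12):
        """X: hyperspectral profiles (n_samples, n_bands); y: 1 = bonemeal, 0 = other."""
        idx = np.arange(len(X))                 # profiles still flagged as bonemeal
        models = []
        for _ in range(n_stages):
            lda = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
            pred = lda.predict(X[idx])
            models.append(lda)
            idx = idx[pred == 1]                # keep only profiles still called bonemeal
            if len(np.unique(y[idx])) < 2:      # stop if only one true class remains
                break
        return models, idx
    ```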

  18. Machine Vision System for Automatic Weeding Strategy in Oil Palm Plantation using Image Filtering Technique

    OpenAIRE

    Ghazali, Kamarul Hawari; Mustafa, Mohd. Marzuki; Hussain, Aini

    2009-01-01

    Machine vision is an application of computer vision to automate conventional work in industry, manufacturing or any other field. Nowadays, people in agriculture industry have embarked into research on implementation of engineering technology in their farming activities. One of the precision farming activities that involve machine vision system is automatic weeding strategy. Automatic weeding strategy in oil palm plantation could minimize the volume of herbicides that is sprayed to the f...

  19. Machine-vision-based identification of broken inserts in edge profile milling heads

    NARCIS (Netherlands)

    Fernandez Robles, Laura; Azzopardi, George; Alegre, Enrique; Petkov, Nicolai

    This paper presents a reliable machine vision system to automatically detect inserts and determine if they are broken. Unlike the machining operations studied in the literature, we are dealing with edge milling head tools for aggressive machining of thick plates (up to 12 centimetres) in a single

  20. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    Science.gov (United States)

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  1. Machine vision: recent advances in CCD video camera technology

    Science.gov (United States)

    Easton, Richard A.; Hamilton, Ronald J.

    1997-09-01

    This paper describes four state-of-the-art digital video cameras which provide advanced features that benefit computer image enhancement, manipulation, and analysis. These cameras were designed to reduce the complexity of imaging systems while increasing the accuracy, dynamic range, and detail enhancement of product inspections. Two cameras utilize progressive scan CCD sensors, enabling the capture of high-resolution images of moving objects without the need for strobe lights or mechanical shutters. The second progressive scan camera has an unusually high resolution of 1280 by 1024 and a choice of serial or parallel digital interface for data and control. The other two cameras incorporate digital signal processing (DSP) technology for improved dynamic range, more accurate determination of color, white balance stability, and enhanced contrast of part features against the background. Successful applications and future product development trends are discussed. A brief description of analog and digital image capture devices addresses the most common questions regarding interface requirements within a typical machine vision system overview.

  2. Detection of eviscerated poultry spleen enlargement by machine vision

    Science.gov (United States)

    Tao, Yang; Shao, June J.; Skeeles, John K.; Chen, Yud-Ren

    1999-01-01

    The size of a poultry spleen is an indication of whether the bird is wholesome or has a virus-related disease. This study explored the possibility of detecting poultry spleen enlargement with a computer imaging system to assist human inspectors in food safety inspections. Images of the internal viscera of 45-day-old hybrid turkeys were taken using fluorescent and UV lighting systems. Image processing algorithms including linear transformation, morphological operations, and statistical analyses were developed to distinguish the spleen from its surroundings and then to detect abnormal spleens. Experimental results demonstrated that the imaging method could effectively distinguish spleens from other organs and intestines. Based on a total sample of 57 birds, the classification rates for the correct detection of normal and abnormal birds were 92% on a self-test set and 95% on an independent test set. The methodology indicates the feasibility of using automated machine vision systems in the future to inspect internal organs and check the wholesomeness of poultry carcasses.

  3. Machine vision for automated inspection of railway traffic recordings

    Science.gov (United States)

    Machy, Caroline; Desurmont, Xavier; Mancas-Thillou, Céline; Carincotte, Cyril; Delcourt, Vincent

    2009-02-01

    For the 9000 train accidents reported each year in the European Union [1], the Recording Strip (RS) and Filling-Card (FC) related to the train's activities represent the only usable evidence for SNCF (the French railway operator) and most national authorities. More precisely, the RS contains information about the train journey, speed and related Driving Events (DE) such as emergency brakes, while the FC gives details on the departure/arrival stations. In this context, a complete check of 100% of the RS was recently voted by French law enforcement authorities (instead of the 5% currently performed), which raised the question of an automated and efficient inspection of this huge amount of recordings. To do so, we propose a machine vision prototype consisting of cassettes that receive the RS and FC to be digitized. A video analysis module first determines the type of RS among eight possible types; time/speed curves are then extracted to estimate the covered distance, speed and stops, while the associated DE are finally detected using a convolution process. A detailed evaluation on 15 RS (8000 kilometers and 7000 DE) shows very good results (100% correct detection of the strip type, and only 0.28% missed detections for the DE). An exhaustive evaluation on a panel of about 100 RS constitutes the perspective for future work.

  4. Prolog-based prototyping software for machine vision

    Science.gov (United States)

    Batchelor, Bruce G.; Hack, Ralf; Jones, Andrew C.

    1996-10-01

    Prolog image processing (PIP) is a multimedia prototyping tool intended to assist designers of intelligent industrial machine vision systems. It is the latest in a series of Prolog-based systems that have been implemented at Cardiff specifically for this purpose. The software package provides fully integrated facilities for both interactive and programmed image processing, 'smart' documentation, guidance about which lighting/viewing set-up to use, speech/natural-language input and speech output. It can also be used to control a range of electro-mechanical devices, such as lamps, cameras, lenses, pneumatic positioning mechanisms, robots, etc., via a low-cost hardware interfacing module. The software runs on a standard computer and differs from its predecessors in that the image processing is carried out entirely in software. This article concentrates on the design and implementation of the PIP system, and presents programs for two demonstration applications: (a) recognizing a non-picture playing card; (b) recognizing a well laid table place setting.

  5. Intelligent Machine Vision System for Automated Quality Control in Ceramic Tiles Industry

    OpenAIRE

    KESER, Tomislav; HOCENSKI, Željko; HOCENSKI, Verica

    2010-01-01

    An intelligent system for automated visual quality control of ceramic tiles based on machine vision is presented in this paper. The ceramic tile production process is well automated in almost all production stages, with the exception of the quality control stage at the end. Tile quality is checked using visual quality control principles, where the main goal is to successfully replace the human operator in the production chain with an automated machine vision system to ...

  6. Potato Size and Shape Detection Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Liao Guiping

    2015-01-01

    To reduce error and speed up classification, potato size and shape are graded mechanically through machine vision, using a feature-extraction procedure to determine size and a shape-detection procedure to determine shape. In size detection, the test results gave a length scale, or calibration factor, of 40/191 = 0.210 mm/pixel (40/M, where 40 is the 40 mm diameter of a table tennis ball and M = 191 is its diameter in image pixels). Compared with manual measurements, the absolute error of the algorithm was <3 mm and the relative error was <4%, and measurements based on the fitted ellipse axis lengths accurately gave the actual long and short axes of the potato. In shape detection, out of 228 images (114 positive and 114 negative sides) only 2 were incorrectly classified, mainly because the extracted ratio (R) of those positive and negative potato images was close to 0.67 (0.671887, 0.661063, 0.667604, and 0.67193, respectively). A comparison of the basic rectangle and ellipse R-ratio methods for detecting potato size and shape showed that the basic rectangle method performs better when the potato position is fixed, while the ellipse axis method was more stable, with an error rate of 7%. It is therefore recommended that the ellipse axis method be used to classify potato shapes into round, long cylindrical, and oval, with an accuracy of 98.8%.
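
    A hedged sketch of the size measurement described above: the mm/pixel scale comes from the 40 mm table tennis ball imaged at 191 pixels, and the potato's long and short axes come from an ellipse fitted to its largest contour (OpenCV; the function and variable names are assumptions for illustration).

    ```python
    import cv2

    MM_PER_PIXEL = 40.0 / 191.0          # ≈ 0.21 mm/pixel, the calibration factor above

    def potato_axes(binary_mask):
        """binary_mask: 8-bit image with the potato as the white region."""
        contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)
        (cx, cy), axes, angle = cv2.fitEllipse(largest)   # axes are full lengths in pixels
        short_px, long_px = sorted(axes)
        return long_px * MM_PER_PIXEL, short_px * MM_PER_PIXEL  # long, short axis in mm
    ```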

  7. Design of apochromatic lens with large field and high definition for machine vision.

    Science.gov (United States)

    Yang, Ao; Gao, Xingyu; Li, Mingfeng

    2016-08-01

    Precise machine vision detection of a large object at a finite working distance (WD) requires a lens with high resolution over a large field of view (FOV). In this case, the effect of the secondary spectrum on image quality is not negligible. According to the detection requirements, a high-resolution apochromatic objective is designed and analyzed. The initial optical structure (IOS) is formed by combining three segments. Next, the secondary spectrum of the IOS is corrected by replacing glasses using a dispersion vector analysis method based on the Buchdahl dispersion equation. The remaining aberrations are optimized in the commercial optical design software ZEMAX by properly choosing the merit function operands. The optimized optical structure (OOS) has an f-number (F/#) of 3.08, a FOV of φ60 mm, a WD of 240 mm, and a modulation transfer function (MTF) of more than 0.1 at 320 cycles/mm in all fields. The design requirements for a non-fluorite apochromatic objective lens with a large field and high definition for machine vision detection have been achieved.
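
    For context, the textbook thin-lens conditions that a three-glass apochromat must satisfy can be written as follows (a standard sketch, not the authors' specific merit function), where $\varphi_i$ are the element powers, $\nu_i$ the Abbe numbers and $P_i$ the relative partial dispersions of the chosen glasses:

    $$\sum_i \varphi_i = \varphi, \qquad \sum_i \frac{\varphi_i}{\nu_i} = 0, \qquad \sum_i \frac{P_i\,\varphi_i}{\nu_i} = 0$$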

  8. Combining human and machine processes (CHAMP)

    Science.gov (United States)

    Sudit, Moises; Sudit, David; Hirsch, Michael

    2015-05-01

    Machine Reasoning and Intelligence is usually done in a vacuum, without consultation of the ultimate decision-maker. The late consideration of the human cognitive process causes some major problems in the use of automated systems to provide reliable and actionable information that users can trust and depend on to make the best Course-of-Action (COA). On the other hand, if automated systems are created exclusively based on human cognition, then there is a danger of developing systems that don't push the barrier of technology and are designed mainly for the comfort level of selected subject matter experts (SMEs). Our approach to combining human and machine processes (CHAMP) is based on the notion of developing optimal strategies for where, when, how, and which human intelligence should be injected within a machine reasoning and intelligence process. This combination is based on the criteria of improving the quality of the output of the automated process while maintaining the computational efficiency required for a COA to be actuated in a timely fashion. This research addresses the following problem areas: • Providing consistency within a mission: injection of human reasoning and intelligence within the reliability and temporal needs of a mission to attain situational awareness, impact assessment, and COA development. • Supporting the incorporation of data that is uncertain, incomplete, imprecise and contradictory (UIIC): development of mathematical models to suggest the insertion of a cognitive process within a machine reasoning and intelligent system so as to minimize UIIC concerns. • Developing systems that include humans in the loop whose performance can be analyzed and understood to provide feedback to the sensors.

  9. Chemometric studies on zNose™ and machine vision technologies for discrimination of commercial extra virgin olive oils

    OpenAIRE

    Kadiroǧlu, Pınar; KOREL, Figen

    2015-01-01

    The aim of this study was to classify Turkish commercial extra virgin olive oil (EVOO) samples according to geographical origins by using surface acoustic wave sensing electronic nose (zNose™) and machine vision system (MVS) analyses in combination with chemometric approaches. EVOO samples obtained from north and south Aegean region were used in the study. The data analyses were performed with principal component analysis class models, partial least squares-discriminant analysis (PLS-DA) and ...

  10. Experimental Machine Vision System for Training Students in Virtual Instrumentation Techniques

    Directory of Open Access Journals (Sweden)

    Rodica Holonec

    2011-10-01

    The aim of this paper is to present the main techniques in designing and building a complex machine vision system in order to train electrical engineering students in using virtual instrumentation. The proposed test bench performs automatic adjustment of electrical circuit parameters on a belt conveyor. The students can learn how to combine mechanics, electronics, electrical engineering, and image acquisition and processing in order to solve the proposed application. After implementing the system, the students are asked to present how they would modify or extend it for an industrial environment, regarding the automatic adjustment of electrical parameters or the calibration of different types of sensors (distance, proximity, etc.) without human intervention in the process.

  11. Machine Vision Automation for Ground Control Tele-Robotics Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project seeks to advance ground based tele-robotic capabilities with the development of natural feature target tracking technology with the use of machine...

  12. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  13. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  14. Computer vision and machine learning with RGB-D sensors

    CERN Document Server

    Shao, Ling; Kohli, Pushmeet

    2014-01-01

    This book presents an interdisciplinary selection of cutting-edge research on RGB-D based computer vision. Features: discusses the calibration of color and depth cameras, the reduction of noise on depth maps and methods for capturing human performance in 3D; reviews a selection of applications which use RGB-D information to reconstruct human figures, evaluate energy consumption and obtain accurate action classification; presents an approach for 3D object retrieval and for the reconstruction of gas flow from multiple Kinect cameras; describes an RGB-D computer vision system designed to assist t

  15. Toward The Robot Eye: Isomorphic Representation For Machine Vision

    Science.gov (United States)

    Schenker, Paul S.

    1981-10-01

    This paper surveys some issues confronting the conception of models for general purpose vision systems. We draw parallels to requirements of human performance under visual transformations naturally occurring in the ecological environment. We argue that successful real world vision systems require a strong component of analogical reasoning. We propose a course of investigation into appropriate models, and illustrate some of these proposals by a simple example. Our study emphasizes the potential importance of isomorphic representations - models of image and scene which embed a metric of their respective spaces, and whose topological structure facilitates identification of scene descriptors that are invariant under viewing transformations.

  16. Measuring Leaf Motion of Tomato by Machine Vision

    NARCIS (Netherlands)

    Henten, van E.J.; Marx, G.E.H.; Hofstee, J.W.; Hemming, J.; Sarlikioti, V.

    2012-01-01

    For a better understanding of growth and development of tomato plants in three dimensional space, tomato plants were monitored using a computer vision system. It is commonly known that leaves of tomato plants do not have a fixed position and orientation during the day; they move in response to

  17. On-Line Estimation of Laser-Drilled Hole Depth Using a Machine Vision Method

    Directory of Open Access Journals (Sweden)

    Te-Ying Liao

    2012-07-01

    The paper presents a novel method for monitoring and estimating the depth of a laser-drilled hole using machine vision. Through on-line image acquisition and analysis during laser machining, correlations between the machining process and the analyzed images can be obtained simultaneously. Based on the machine vision method, the depths of laser-machined holes can be estimated in real time, and a low-cost on-line inspection system was therefore developed to increase productivity. All of the processing work was performed in air under standard atmospheric conditions, and gas assist was used. A correlation between the cumulative size of the laser-induced plasma region and the depth of the hole is presented. The results indicate that the estimated depths of the laser-drilled holes were a linear function of the cumulative plasma size, with a high degree of confidence. This research provides a novel machine-vision-based method for estimating the depths of laser-drilled holes in real time.
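
    The linear relationship reported above can be sketched as a simple least-squares fit of depth against cumulative plasma size; the arrays below are illustrative placeholder values, not data from the paper.

    ```python
    import numpy as np

    # Placeholder calibration data (assumed units): cumulative plasma size vs. measured depth.
    cumulative_plasma = np.array([1.2e4, 2.5e4, 3.9e4, 5.1e4])   # e.g. plasma pixels accumulated
    measured_depth    = np.array([0.11, 0.23, 0.36, 0.47])       # mm, from reference metrology

    slope, intercept = np.polyfit(cumulative_plasma, measured_depth, 1)

    def estimate_depth(plasma_size):
        """On-line depth estimate from the current cumulative plasma size."""
        return slope * plasma_size + intercept
    ```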

  18. The systematic development of a machine vision based milking robot

    NARCIS (Netherlands)

    Gouws, J.

    1993-01-01

    Agriculture involves unique interactions between man, machines, and various elements from nature. Therefore the implementation of advanced technology in agriculture holds different challenges than in other sectors of the economy. This dissertation stems from research into the application of

  19. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered, accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  20. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  1. Gall mite inspection on dormant black currant buds using machine vision

    DEFF Research Database (Denmark)

    Nielsen, M. R.; Stigaard Laursen, Morten; Jonassen, M. S.

    2013-01-01

    This paper presents a novel machine vision-based approach detecting and mapping gall mite infection in dormant buds on black currant bushes. A vehicle was fitted with four cameras and RTK-GPS. Results compared automatic detection to human decisions based on the images, and by mapping the results ...

  2. Scratch measurement system using machine vision: part II

    Science.gov (United States)

    Sarr, Dennis P.

    1992-03-01

    Aircraft skins and windows must not have scratches, which are unacceptable for cosmetic and structural reasons. Manual methods are inadequate for giving accurate readings and do not provide a hardcopy report. A prototype scratch measurement system (SMS) using computer vision and image analysis has been developed. This paper discusses the prototype description, novel ideas, improvements, repeatability, reproducibility, accuracy, and the calibration method. Boeing's Calibration Certification Laboratory has given the prototype a qualified certification. The SMS is portable for use in factories or aircraft hangars anywhere in the world.

  3. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Science.gov (United States)

    2010-11-22

    ... modify a final initial determination (``ID'') of the presiding administrative law judge (``ALJ''). The..., California; Techno Soft Systemnics, Inc. (``Techno Soft'') of Japan; Fuji Machine Manufacturing Co., Ltd. of... Soft based on partial withdrawal of the complaint. On April 20, 2010, the Commission issued notice of...

  4. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Science.gov (United States)

    2010-09-30

    ... determined to review-in-part a final initial determination (``ID'') of the presiding administrative law judge...''); Amistar Automation, Inc. (``Amistar'') of San Marcos, California; Techno Soft Systemnics, Inc. (``Techno Soft'') of Japan; Fuji Machine Manufacturing Co., Ltd. of Japan and Fuji America Corporation of Vernon...

  5. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review

    Directory of Open Access Journals (Sweden)

    Luis Pérez

    2016-03-01

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.

  6. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.

    Science.gov (United States)

    Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F

    2016-03-05

    In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.

  7. APLIKASI SISTEM MONITORING PERTUMBUHAN TANAMAN BERBASIS WEB MENGGUNAKAN MACHINE VISION Application of Web-based Monitoring System for Plant Growing by Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Lilik Sutiarso

    2012-05-01

    Nowadays, the demand for integrating information technology (IT) with agricultural systems is driven by the need to increase productivity, efficiency and profitability in precision agriculture. This need arises from problems in the field such as insufficiently intensive monitoring of plants during the growing period. One alternative solution to this problem is to introduce machine vision technology into the farming system. This basic research aims to use digital image processing and computational (mathematical) software to support a real-time monitoring function for plant growth. The workflow starts with digital image processing, using an image segmentation method that separates the main object (plant) from the rest of the scene (soil, weeds). The image processing algorithm uses the excess color method and color normalization to identify plants and calculate the crop area, and the Otsu method to convert the result to binary images. The next step is to calculate and analyze the percentage of plant growth from planting until harvest time. The analyzed data are stored in a MySQL database on the web server. The final output of the research is a web-based plant growth monitoring instrument that can be accessed over an intranet (local area network) as well as the internet. In software testing, monitoring with the machine vision system achieved a success rate of 70% in identifying plants.
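
    A minimal sketch of the segmentation steps named above (excess-green index followed by Otsu thresholding), assuming an OpenCV BGR input image; this is an illustration of the technique, not the authors' code.

    ```python
    import cv2
    import numpy as np

    def segment_plants(bgr_image):
        b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
        exg = 2.0 * g - r - b                                  # excess-green index
        exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        coverage = float(np.count_nonzero(mask)) / mask.size   # fraction of plant pixels
        return mask, coverage
    ```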

  8. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC

    Directory of Open Access Journals (Sweden)

    Zhangwei Chen

    2013-03-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and the users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo image pairs; the maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications, and it offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels.

  9. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels.
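
    For clarity, the SAD block-matching computation that the FPGA pipeline implements can be sketched in plain NumPy (5 × 5 window, 64-pixel disparity range); this is a software illustration of the algorithm only, not a model of the hardware design.

    ```python
    import numpy as np

    def sad_disparity(left, right, max_disp=64, win=5):
        """left, right: rectified grayscale images as float arrays of equal shape."""
        h, w = left.shape
        r = win // 2
        disp = np.zeros((h, w), dtype=np.uint8)
        for y in range(r, h - r):
            for x in range(r + max_disp, w - r):
                patch = left[y - r:y + r + 1, x - r:x + r + 1]
                costs = [np.abs(patch - right[y - r:y + r + 1,
                                              x - d - r:x - d + r + 1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))   # disparity minimizing the SAD cost
        return disp
    ```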

  10. Express quality control of chicken eggs by machine vision

    Science.gov (United States)

    Gorbunova, Elena V.; Chertov, Aleksandr N.; Peretyagin, Vladimir S.; Korotaev, Valery V.; Arbuzova, Evgeniia A.

    2017-06-01

    The urgency of analyzing foodstuff quality is determined by the strategy of promoting a healthy lifestyle and rational nutrition for the world population. This applies to products such as chicken eggs. In particular, it is necessary to control chicken egg quality at the farm prior to incubation in order to avoid possible hereditary diseases, as well as high embryonic mortality and a sharp decrease in the quality of the reared young. To date, the market offers no objective instruments for contactless express quality control, i.e. analytical equipment allowing high-precision examination of chicken eggs, whose quality is determined by the color parameters of the eggshell (color uniformity) and of the yolk, and by the presence of various eggshell defects (cracks, growths, wrinkles, dirt). All of these features are usually evaluated only visually (subjectively) with the help of normalized color standards and ovoscopes. This work is therefore devoted to investigating the applicability of a contactless express control method based on machine vision for chicken egg quality analysis. As a result of the studies, a prototype with appropriate software was proposed. Experimental studies of this equipment were carried out on a representative sample of eggs from chickens of different breeds (the total number of analyzed samples exceeds 300). The correctness of the color analysis was verified by spectrophotometric studies of the eggshell surface.

  11. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    Science.gov (United States)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed in pasture and rangeland which are usually huge in size, and are not easily accessible by vehicles. The current research method for livestock location identification and counting is visual observation which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals which is noisy and disturbs the animals, and may introduce a source of error in counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable solution to manual animal surveying. A fixed-wing UAV fitted with GPS and digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were flown with four UAS flights and individual photographs used to develop orthomosaic imagery. To detect animals in UAV imagery, a fully automatic technique was developed based on spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results in comparison to ground-truth show the effectiveness of our algorithm.

  12. Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2013-05-01

    Wheel alignment, consisting of properly checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field, since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on machine vision have been widely studied in order to automatically detect the wheels' characteristic angles. In order to overcome the limitations of existing methodologies, due to measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine-vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to characteristic planes. Such planes, properly referenced to a global coordinate system, are used for determining the wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system proved fully compatible with the expected accuracy of wheel alignment systems.
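
    As a sketch of how characteristic-plane geometry can be turned into wheel angles, the snippet below converts a reconstructed wheel-plane unit normal into toe and camber; the vehicle-frame axis and sign conventions are assumptions for illustration, not those used in the paper.

    ```python
    import numpy as np

    def wheel_angles(n):
        """n: wheel-plane normal (pointing outboard) in a frame with x forward, y outboard, z up."""
        n = np.asarray(n, dtype=float)
        n /= np.linalg.norm(n)
        toe = np.degrees(np.arctan2(n[0], n[1]))                      # rotation about the vertical axis
        camber = np.degrees(np.arctan2(n[2], np.hypot(n[0], n[1])))   # tilt of the wheel plane from vertical
        return toe, camber

    print(wheel_angles([0.02, 0.999, 0.01]))   # a wheel with small positive toe and camber
    ```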

  13. Solder inspection using an object-oriented approach to machine vision

    Science.gov (United States)

    Bowskill, Jerry M.; Katz, T.; Downie, J. H.

    1995-03-01

    A generic approach to the development and integration of machine vision, within surface- mount electronics manufacturing, has been proceeding based on the concept of a standard vision framework. A framework is a collection of system components, the connection of which can be configured with appropriate support tools. This is facilitated using object- oriented analysis and design techniques to identify and describe those elements, or modules, that are crucial to all vision systems within the domain. Analysis of surface-mount manufacturing has identified fifteen potential tasks in which machine vision inspection and control is beneficial. The essential functionality which spans these tasks has been identified and incorporated in a set of approximately twenty visual components implemented using the KAPPA programming environment. A practical exploration has been made into using the framework to develop a method of classifying insufficient solder deposits based on the distinct light reflection characteristics of solder fillets when illuminated from different angles. Classification has been reliably achieved by calculating the variation in mean luminance of specific fillet regions between images obtained with high and low angles of lighting using a custom light source. The resulting system architecture has illustrated the potential of object- oriented software and specification techniques, producing an elegant structure based on code reuse and `design by extension'.

  14. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has largely shifted from manual inspection to machine-assisted vision inspection. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine-vision-based mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, having different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found using computer vision algorithms. The output is then transferred to a microcontroller which controls the alignment mechanism on which the mold is placed at the optimum location.
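
    A minimal sketch of locating the pouring-cup centre with a circular Hough transform, one common way to implement the step described above; the function name and parameter values are illustrative assumptions, not the authors' settings.

    ```python
    import cv2
    import numpy as np

    def find_pouring_cup(gray, min_radius=40, max_radius=120):
        blurred = cv2.GaussianBlur(gray, (9, 9), 2)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                                   param1=100, param2=40,
                                   minRadius=min_radius, maxRadius=max_radius)
        if circles is None:
            return None
        x, y, r = np.round(circles[0, 0]).astype(int)   # strongest detected circle
        return (x, y), r   # centre coordinates to feed back to the alignment mechanism
    ```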

  15. Rethinking Robot VisionCombining Shape and Appearance

    Directory of Open Access Journals (Sweden)

    Matthias J. Schlemmer

    2007-09-01

    Equipping autonomous robots with vision sensors provides a multitude of advantages while simultaneously bringing up difficulties with regard to differing illumination conditions. Furthermore, especially with service robots, the objects to be handled must somehow be learned for later manipulation. In this paper we summarise work on combining two different vision sensors, namely a laser range scanner and a monocular colour camera, for shape-capturing, detecting and tracking of objects in cluttered scenes without the need for intermediate user interaction. The use of different sensor types provides the advantage of separating the shape and the appearance of the object and therefore overcomes the problem of changing illumination conditions. We describe the framework and its components for visual shape-capturing, fast 3D object detection and robust tracking, as well as examples that show the feasibility of this approach.

  16. Compensation strategy for machining optical freeform surfaces by the combined on- and off-machine measurement.

    Science.gov (United States)

    Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou

    2015-09-21

    Freeform surfaces are promising candidates for next-generation optics; however, they require high form accuracy for excellent performance. A closed loop of fabrication, measurement and compensation is necessary to improve the form accuracy. It is difficult to perform an off-machine measurement during freeform machining because remounting inaccuracy can result in significant form deviations. On the other hand, an on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured in off-machine mode with nanometric accuracy, and the on-machine probe establishes the accurate relative position between the workpiece and the machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors, avoiding extra manual adjustment or a highly accurate reference-feature fixture. Experimental results verified the effectiveness of the proposed method.

  17. A New Approach to Spindle Radial Error Evaluation Using a Machine Vision System

    Directory of Open Access Journals (Sweden)

    Kavitha C.

    2017-03-01

    The spindle rotational accuracy is one of the important issues in a machine tool, as it affects the surface topography and dimensional accuracy of the workpiece. This paper presents a machine-vision-based approach to radial error measurement of a lathe spindle using a CMOS camera and a PC-based image processing system. In the present work, a precisely machined cylindrical master is mounted on the spindle as a datum surface and variations of its position are captured using the camera for evaluating the runout of the spindle. The Circular Hough Transform (CHT) is used to detect variations of the centre position of the master cylinder during spindle rotation at the subpixel level from a sequence of images. Radial error values of the spindle are evaluated using Fourier series analysis of the centre position of the master cylinder, calculated with the least squares curve fitting technique. The experiments have been carried out on a lathe at different operating speeds and the spindle radial error estimation results are presented. The proposed method provides a simpler approach to on-machine estimation of the spindle radial error in machine tools.
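
    A simplified sketch of the evaluation step described above: the once-per-revolution component of the measured centre positions (dominated by master-cylinder eccentricity and centring error) is removed by a least-squares Fourier fit, and the residual motion is taken as the radial error. This is an assumption-laden simplification, not the paper's exact Fourier-series analysis.

    ```python
    import numpy as np

    def radial_error(theta, cx, cy):
        """theta: spindle angles (rad); cx, cy: sub-pixel centre positions (same units)."""
        theta = np.asarray(theta, float)
        cx, cy = np.asarray(cx, float), np.asarray(cy, float)
        A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
        res_x = cx - A @ np.linalg.lstsq(A, cx, rcond=None)[0]   # remove offset + eccentricity
        res_y = cy - A @ np.linalg.lstsq(A, cy, rcond=None)[0]
        r = np.hypot(res_x, res_y)                               # residual radial motion
        return r.max()                                           # peak radial error value
    ```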

  18. Extreme Learning Machine and Moving Least Square Regression Based Solar Panel Vision Inspection

    Directory of Open Access Journals (Sweden)

    Heng Liu

    2017-01-01

    In recent years, learning-based machine intelligence has attracted a lot of attention across science and engineering. Particularly in the field of automatic industrial inspection, machine-learning-based vision inspection plays an increasingly important role in defect identification and feature extraction. By learning from image samples, many features of industrial objects, such as shapes, positions, and orientation angles, can be obtained and then used to determine whether a defect is present. However, robustness and speed are not easily achieved with such an inspection approach. In this work, for solar panel vision inspection, we present an extreme learning machine (ELM) and moving least square regression based approach to identify solder-joint defects and detect the panel position. First, histogram peaks distribution (HPD) and fractional calculus are applied for image preprocessing. Then, ELM-based identification of defective solder joints is discussed in detail. Finally, the moving least square regression (MLSR) algorithm is introduced for solar panel position determination. Experimental results and comparisons show that the proposed ELM- and MLSR-based inspection method is efficient in terms of both detection accuracy and processing speed.
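
    For reference, the core of an extreme learning machine is small enough to sketch in a few lines: hidden-layer weights are random and fixed, and only the output weights are solved, in closed form, by least squares. This is a generic ELM sketch, not the authors' implementation.

    ```python
    import numpy as np

    class ELM:
        def __init__(self, n_hidden=100, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, Y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
            self.b = self.rng.normal(size=self.n_hidden)                 # random biases
            H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))             # hidden-layer output
            self.beta = np.linalg.pinv(H) @ Y                            # closed-form output weights
            return self

        def predict(self, X):
            H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
            return H @ self.beta
    ```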

  19. A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video

    Directory of Open Access Journals (Sweden)

    Yingju Chen

    2012-01-01

    Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes 1-2 hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as an image abstraction. The process of selecting relevant image features is often determined by the modality of the medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRT and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions on automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on research that identifies specific gastrointestinal (GI) pathology and on methods of shot boundary detection.

  20. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    Science.gov (United States)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
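
    The workflow sketched in this record (local feature detection, a visual-dictionary image representation, then classification of powder micrographs) can be approximated as a bag-of-visual-words pipeline. The sketch below uses ORB descriptors, k-means visual words and an SVM as stand-ins; the file names, labels and cluster count are assumptions, not the authors' exact configuration.
```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

orb = cv2.ORB_create(nfeatures=500)

def descriptors(path):
    """Detect keypoints and compute ORB descriptors for one powder micrograph."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def bovw_histogram(desc, codebook):
    """Quantise descriptors against the visual dictionary and build a normalised histogram."""
    words = codebook.predict(desc.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Placeholder labelled micrographs of different feedstock powders.
image_paths = ["powder_a_01.png", "powder_b_01.png"]
labels = ["alloy_A", "alloy_B"]
all_desc = [descriptors(p) for p in image_paths]

# Build the visual dictionary from all descriptors, then represent each image as a histogram.
codebook = KMeans(n_clusters=50, n_init=10, random_state=0)
codebook.fit(np.vstack(all_desc).astype(np.float32))
X = np.array([bovw_histogram(d, codebook) for d in all_desc])

clf = SVC(kernel="rbf").fit(X, labels)         # classify micrographs by material system
print(clf.predict(X))
```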

  1. Tunable machine vision-based strategy for automated annotation of chemical databases.

    Science.gov (United States)

    Park, Jungkap; Rosania, Gus R; Saitou, Kazuhiro

    2009-08-01

    We present a tunable, machine vision-based strategy for automated annotation of virtual small molecule databases. The proposed strategy is based on the use of a machine vision-based tool for extracting structure diagrams in research articles and converting them into connection tables, a virtual "Chemical Expert" system for screening the converted structures based on adjustable levels of estimated conversion accuracy, and a fragment-based measure for calculating intermolecular similarity. For annotation, calculated chemical similarity between the converted structures and entries in a virtual small molecule database is used to establish the links. The overall annotation performance can be tuned by adjusting the cutoff threshold of the estimated conversion accuracy. We perform an annotation test which attempts to link 121 journal articles registered in PubMed to entries in PubChem, the largest publicly accessible chemical database. Two test cases are performed, and their results are compared to see how the overall annotation performance is affected by the different threshold levels of the estimated accuracy of the converted structure. Our work demonstrates that over 45% of the articles could have true positive links to entries in the PubChem database, with promising recall and precision rates in both tests. Furthermore, we illustrate that the Chemical Expert system, which can screen converted structures based on adjustable levels of estimated conversion accuracy, is a key factor impacting the overall annotation performance. We propose that this machine vision-based strategy can be incorporated with the text-mining approach to facilitate extraction of contextual scientific knowledge about a chemical structure from the scientific literature.
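
    The fragment-based similarity used to establish the links can be pictured as a Tanimoto (Jaccard) coefficient over the sets of fragments present in two structures. The fragments below are arbitrary illustrative strings, not the paper's actual fragment scheme, and the 0.5 cutoff is an assumption.
```python
def tanimoto(frags_a, frags_b):
    """Tanimoto (Jaccard) similarity between two sets of molecular fragments."""
    a, b = set(frags_a), set(frags_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical fragment sets for a converted structure and two database entries.
converted = {"c1ccccc1", "C(=O)O", "CN"}
entry_1   = {"c1ccccc1", "C(=O)O", "O"}
entry_2   = {"CCO", "CN"}

# Link the converted structure to database entries whose similarity exceeds a cutoff.
for name, frags in [("entry_1", entry_1), ("entry_2", entry_2)]:
    sim = tanimoto(converted, frags)
    print(name, round(sim, 2), "link" if sim >= 0.5 else "no link")
```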

  2. The Intangible Assets Advantages in the Machine Vision Inspection of Thermoplastic Materials

    Science.gov (United States)

    Muntean, Diana; Răulea, Andreea Simina

    2017-12-01

    Innovation is not a simple concept but is the main source of success. It is more important to have the right people and mindsets in place than to have a perfectly crafted plan in order to make the most out of an idea or business. The aim of this paper is to emphasize the importance of intangible assets when it comes to machine vision inspection of thermoplastic materials, pointing out some aspects related to knowledge-based assets and their role in developing a successful idea into a successful product.

  3. Machine Vision based Micro-crack Inspection in Thin-film Solar Cell Panel

    Directory of Open Access Journals (Sweden)

    Zhang Yinong

    2014-09-01

    Full Text Available A thin-film solar cell consists of various layers, so the surface of the cell shows heterogeneous textures. Because of this property, visual inspection of micro-cracks is very difficult. In this paper, we propose a machine-vision-based micro-crack detection scheme for thin-film solar cell panels. In the proposed method, crack edge detection is based on applying a diagonal kernel and a cross kernel in parallel. Experimental results show that the proposed method detects micro-cracks better than conventional anisotropic-model-based methods that use only a cross kernel.
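
    A minimal sketch of the parallel cross-kernel/diagonal-kernel idea is shown below. The kernel coefficients, smoothing step and threshold are illustrative assumptions, not the paper's exact values.
```python
import cv2
import numpy as np

# Illustrative cross-shaped and diagonal-shaped difference kernels: each responds to
# thin crack lines in its own pair of orientations.
cross_kernel = np.array([[ 0, -1,  0],
                         [-1,  4, -1],
                         [ 0, -1,  0]], dtype=np.float32)
diag_kernel  = np.array([[-1,  0, -1],
                         [ 0,  4,  0],
                         [-1,  0, -1]], dtype=np.float32)

img = cv2.imread("thin_film_cell.png", cv2.IMREAD_GRAYSCALE)  # placeholder panel image
img = cv2.GaussianBlur(img, (5, 5), 0)                        # suppress surface texture noise

# Apply both kernels in parallel and keep the stronger response at each pixel.
resp_cross = cv2.filter2D(img, cv2.CV_32F, cross_kernel)
resp_diag  = cv2.filter2D(img, cv2.CV_32F, diag_kernel)
response   = np.maximum(np.abs(resp_cross), np.abs(resp_diag))

# Threshold the combined response to obtain a binary micro-crack map (placeholder rule).
crack_map = (response > response.mean() + 3 * response.std()).astype(np.uint8) * 255
cv2.imwrite("crack_map.png", crack_map)
```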

  4. Development of the Triple Theta assembly station with machine vision feedback

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Derek William [Los Alamos National Laboratory

    2008-01-01

    Increased requirements for tighter tolerances on assembled target components in complex three-dimensional geometries with only days to assemble complete campaigns require the implementation of a computer-controlled high-precision assembly station. Over the last year, an 11-axis computer-controlled assembly station has been designed and built with custom software to handle the multiple coordinate systems and automatically calculate all relational positions. Preliminary development efforts have also been done to explore the benefit of a machine vision feedback module with a dual-camera viewing system to automate certain basic features like crosshair calibration, component leveling, and component centering.

  5. A study of electrodischarge machining–pulse electrochemical machining combined machining for holes with high surface quality on superalloy

    Directory of Open Access Journals (Sweden)

    Ning Ma

    2015-11-01

    Full Text Available Noncircular holes on the surface of turbine rotor blades are usually machined by electrodischarge machining. A recast layer containing numerous micropores and microcracks is easily generated during the electrodischarge machining process due to the rapid heating and cooling effects, which restrict the wide applications of noncircular holes in aerospace and aircraft industries. Owing to the outstanding advantages of pulse electrochemical machining, electrodischarge machining–pulse electrochemical machining combined technique is provided to improve the overall quality of electrodischarge machining-drilled holes. The influence of pulse electrochemical machining processing parameters on the surface roughness and the influence of the electrodischarge machining–pulse electrochemical machining method on the surface quality and accuracy of holes have been studied experimentally. The results indicate that the pulse electrochemical machining processing time for complete removal of the recast layer decreases with the increase in the pulse electrochemical machining current. The low pulse electrochemical machining current results in uneven dissolution of the recast layer, while the higher pulse electrochemical machining current induces relatively homogeneous dissolution. The surface roughness is reduced from 4.277 to 0.299 µm, and the hole taper induced by top-down electrodischarge machining process was reduced from 1.04° to 0.17° after pulse electrochemical machining. On account of the advantages of electrodischarge machining and the pulse electrochemical machining, the electrodischarge machining–pulse electrochemical machining combined technique could be applied for machining noncircular holes with high shape accuracy and surface quality.

  6. Applications of color machine vision in the agricultural and food industries

    Science.gov (United States)

    Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.

    1999-01-01

    Color is an important factor in the agricultural and food industries. Agricultural or prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and sort produce for defects, but also to perform genetic screenings or make aesthetic judgements. The task of sorting produce following a color scale is very complex and requires special illumination and training. Also, this task cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real time. Applications for sorting a variety of agricultural products are included, e.g. seeds, meat, baked goods, plants and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image processing hardware and example applications for industry are described. The paper also presents an image analysis algorithm and a prototype machine vision system developed for industry. This system automatically locates the surface of certain plants using a digital camera and predicts information such as size, potential value and plant type. The algorithm developed is feasible for real-time identification in an industrial environment.
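
    A very small sketch of real-time colour classification of the kind described: each product class is represented by its mean colour in HSV space, and a region is assigned to the nearest class mean. The class names, reference colours and input image are invented for illustration and are not the paper's classifier.
```python
import cv2
import numpy as np

# Invented reference classes: mean HSV colour per grade (H in [0, 180] as used by OpenCV).
CLASS_MEANS = {
    "ripe":   np.array([  5, 180, 160], dtype=np.float32),
    "unripe": np.array([ 60, 150, 140], dtype=np.float32),
    "defect": np.array([ 20,  60,  60], dtype=np.float32),
}

def classify_region(bgr_region):
    """Assign a region to the class whose mean HSV colour is closest (nearest class mean)."""
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float32)
    mean_hsv = hsv.mean(axis=0)
    distances = {name: np.linalg.norm(mean_hsv - ref) for name, ref in CLASS_MEANS.items()}
    return min(distances, key=distances.get)

frame = cv2.imread("produce.png")              # placeholder image of one product
print("assigned grade:", classify_region(frame))
```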

  7. Broiler weight estimation based on machine vision and artificial neural network.

    Science.gov (United States)

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate live body weight of broiler chickens in 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed two times daily. To localise chickens within the pen, an ellipse fitting algorithm was used and the chickens' head and tail removed using the Chan-Vese method. 3. The correlations between the body weight and 6 extracted physical features indicated that there were strong correlations between body weight and the 5 features including area, perimeter, convex area, major and minor axis length. 5. According to statistical analysis there was no significant difference between morning and afternoon data over 42 d. 6. In an attempt to improve the accuracy of live weight approximation different ANN techniques, including Bayesian regulation, Levenberg-Marquardt, Scaled conjugate gradient and gradient descent, were used. Bayesian regulation with an R2 value of 0.98 was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.

  8. Vision-Based Perception and Classification of Mosquitoes Using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Masataka Fuchida

    2017-01-01

    Full Text Available The need for a novel automated mosquito perception and classification method has become increasingly essential in recent years, with a steeply increasing number of mosquito-borne diseases and associated casualties. There exist remote sensing and GIS-based methods for mapping potential mosquito habitats and locations that are prone to mosquito-borne diseases, but these methods generally do not account for species-wise identification of mosquitoes in closed-perimeter regions. Traditional methods for mosquito classification involve highly manual processes requiring tedious sample collection and supervised laboratory analysis. In this research work, we present the design and experimental validation of an automated vision-based mosquito classification module that can be deployed in closed-perimeter mosquito habitats. The module is capable of distinguishing mosquitoes from other bugs such as bees and flies by extracting morphological features, followed by support vector machine-based classification. In addition, this paper presents the results of three variants of the support vector machine classifier in the context of the mosquito classification problem. This vision-based approach to the mosquito classification problem presents an efficient alternative to the conventional methods for mosquito surveillance, mapping and sample image collection. Experimental results involving classification between mosquitoes and a predefined set of other bugs using multiple classification strategies demonstrate the efficacy and validity of the proposed approach with a maximum recall of 98%.

  9. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants

    Directory of Open Access Journals (Sweden)

    Pedro J. Navarro

    2016-05-01

    Full Text Available Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the problem of automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.

  10. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    Science.gov (United States)

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the problem of automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.

  11. Fast and intuitive programming of adaptive laser cutting of lace enabled by machine vision

    Science.gov (United States)

    Vaamonde, Iago; Souto-López, Álvaro; García-Díaz, Antón

    2015-07-01

    A machine vision system has been developed, validated, and integrated in a commercial laser robot cell. It permits an offline graphical programming of laser cutting of lace. The user interface allows loading CAD designs and aligning them with images of lace pieces. Different thread widths are discriminated to generate proper cutting program templates. During online operation, the system aligns CAD models of pieces and lace images, pre-checks quality of lace cuts and adapts laser parameters to thread widths. For pieces detected with the required quality, the program template is adjusted by transforming the coordinates of every trajectory point. A low-cost lace feeding system was also developed for demonstration of full process automation.

  12. Potential application of machine vision technology to saffron (Crocus sativus L.) quality characterization.

    Science.gov (United States)

    Kiani, Sajad; Minaei, Saeid

    2016-12-01

    Saffron quality characterization is an important issue in the food industry and of interest to consumers. This paper proposes an expert system based on the application of machine vision technology for characterization of saffron and shows how it can be employed in practice. There is a correlation between saffron color, its geographic location of production, and some chemical attributes, which could be used for characterization of saffron quality and freshness. This may be accomplished by employing image processing techniques coupled with multivariate data analysis for quantification of saffron properties. Expert algorithms can be made available for prediction of saffron characteristics such as color as well as for product classification. Copyright © 2016. Published by Elsevier Ltd.

  13. Nondestructive Detection of the Internal Quality of Apple Using X-Ray and Machine Vision

    Science.gov (United States)

    Yang, Fuzeng; Yang, Liangliang; Yang, Qing; Kang, Likui

    The internal quality of an apple cannot be judged by eye during sorting, which allows low-quality apples to reach the market. This paper presents an instrument that uses X-ray imaging and machine vision. The following steps were used to process the X-ray image in order to identify mould-core apples. Firstly, a lifting wavelet transform was used to obtain a low-frequency image and three high-frequency images. Secondly, the low-frequency image was enhanced through histogram equalization. Then, the edge of each apple image was detected using the Canny operator. Finally, a threshold was set to distinguish mould-core from normal apples according to the diameter of the apple core. The experimental results show that this method can detect mould-core apples on-line with low time consumption (less than 0.03 seconds per apple), and the accuracy reaches 92%.
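
    The four-step chain above can be sketched in Python as follows. A standard 'haar' DWT is used here as a stand-in for the lifting wavelet transform, and the central-window diameter check and threshold are placeholder assumptions rather than the paper's calibrated procedure.
```python
import cv2
import numpy as np
import pywt

img = cv2.imread("apple_xray.png", cv2.IMREAD_GRAYSCALE)    # placeholder X-ray image

# 1. Wavelet decomposition: keep the low-frequency approximation image.
approx, _details = pywt.dwt2(img.astype(np.float32), "haar")
low = cv2.normalize(approx, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# 2. Enhance the low-frequency image by histogram equalisation.
enhanced = cv2.equalizeHist(low)

# 3. Detect edges with the Canny operator.
edges = cv2.Canny(enhanced, 50, 150)

# 4. Rough core-diameter check: measure the horizontal extent of edge pixels inside a
#    central window and compare it with a calibrated threshold (values are placeholders).
h, w = edges.shape
core = edges[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
cols = np.where(core.any(axis=0))[0]
core_diameter_px = (cols.max() - cols.min()) if cols.size else 0
THRESHOLD_PX = 40                                            # placeholder calibration
print("mould core" if core_diameter_px > THRESHOLD_PX else "normal apple")
```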

  14. Study of Lighting Solutions in Machine Vision Applications for Automated Assembly Operations

    Science.gov (United States)

    Zorcolo, Alberto; Escobar-Palafox, Gustavo; Gault, Rosemary; Scott, Robin; Ridgway, Keith

    2011-12-01

    The application of machine vision techniques represents an invaluable aid in many fields of manufacturing, from part inspection to metrology, robot guidance and assembly operations in general. Effective illumination of the working area is crucial for optimising the performance of such techniques, but unfortunately ideal light conditions are rarely available, especially if the vision system has to work within small areas, possibly close to metallic surfaces with high reflectivity. This work investigates which factors most affect the accuracy in a typical feature recognition and measurement application. A first screening of a set of six factors was carried out by testing three different light sources according to a two-level fractional factorial design of experiments (DOE); a Pareto analysis was then performed to establish which parameters were the most significant. Once the key factors were identified, a second series of experiments was carried out on a single light source, in order to optimise the key parameters and to provide useful guidelines on how to minimise measurement errors in different scenarios.

  15. Automatic optical detection and classification of marine animals around MHK converters using machine vision

    Energy Technology Data Exchange (ETDEWEB)

    Brunton, Steven [Univ. of Washington, Seattle, WA (United States)

    2018-01-15

    Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes (algae/invertebrate/vertebrate, one species/multiple species of fish, and interest rank). Greater than 80% accuracy was achieved using a combination of machine learning techniques.
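
    A simplified sketch of the detection-and-classification stage is shown below. OpenCV's MOG2 background subtractor is used here as a simpler stand-in for robust PCA, and the Fourier features, blob-size filter and SVM training data are assumptions chosen for illustration.
```python
import cv2
import numpy as np
from sklearn.svm import SVC

def fourier_features(gray_roi, size=32, keep=64):
    """Resize a detected region and use Fourier magnitude values as its feature vector."""
    roi = cv2.resize(gray_roi, (size, size)).astype(np.float32)
    mag = np.abs(np.fft.fft2(roi))
    return np.log1p(mag).ravel()[:keep]

# Background subtraction (MOG2 here, as a simpler stand-in for RPCA).
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def detect_events(video_path):
    """Yield feature vectors for foreground blobs large enough to be of interest."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = cv2.medianBlur(subtractor.apply(gray), 5)      # subtraction + filtering
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h > 400:                                   # ignore tiny detections
                yield fourier_features(gray[y:y + h, x:x + w])
    cap.release()

# Binary SVM: event of interest vs. clutter (training data here is random placeholder data;
# in practice it would come from expert-labelled detections).
X_train = np.random.rand(40, 64)
y_train = np.random.randint(0, 2, 40)
clf = SVC(kernel="rbf").fit(X_train, y_train)
for feat in detect_events("monitoring_clip.avi"):             # placeholder video file
    print("event of interest" if clf.predict([feat])[0] == 1 else "clutter")
```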

  16. A transient search using combined human and machine classifications

    Science.gov (United States)

    Wright, Darryl E.; Lintott, Chris J.; Smartt, Stephen J.; Smith, Ken W.; Fortson, Lucy; Trouille, Laura; Allen, Campbell R.; Beck, Melanie; Bouslog, Mark C.; Boyer, Amy; Chambers, K. C.; Flewelling, Heather; Granger, Will; Magnier, Eugene A.; McMaster, Adam; Miller, Grant R. M.; O'Donnell, James E.; Simmons, Brooke; Spiers, Helen; Tonry, John L.; Veldthuis, Marten; Wainscoat, Richard J.; Waters, Chris; Willman, Mark; Wolfenbarger, Zach; Young, Dave R.

    2017-12-01

    Large modern surveys require efficient review of data in order to find transient sources such as supernovae, and to distinguish such sources from artefacts and noise. Much effort has been put into the development of automatic algorithms, but surveys still rely on human review of targets. This paper presents an integrated system for the identification of supernovae in data from Pan-STARRS1, combining classifications from volunteers participating in a citizen science project with those from a convolutional neural network. The unique aspect of this work is the deployment, in combination, of both human and machine classifications for near real-time discovery in an astronomical project. We show that the combination of the two methods outperforms either one used individually. This result has important implications for the future development of transient searches, especially in the era of Large Synoptic Survey Telescope and other large-throughput surveys.

  17. Tomato grading system using machine vision technology and neuro-fuzzy networks (ANFIS)

    Directory of Open Access Journals (Sweden)

    H Izadi

    2016-04-01

    Full Text Available Introduction: The quality of agricultural products is associated with their color, size and health; grading of fruits is regarded as an important step in post-harvest processing. In most cases, manual sorting depends on available manpower, is time consuming, and its accuracy cannot be guaranteed. Machine vision is known to be a useful tool for measuring external features (e.g. size, shape, color and defects), and in recent decades machine vision technology has been used for shape sorting. The main purpose of this study was to develop a new method for tomato grading and sorting using a neuro-fuzzy system (ANFIS) and to compare the accuracy of the ANFIS-predicted results with those suggested by a human expert. Materials and Methods: In this study, a total of 300 images of tomatoes (Rev ground) was randomly harvested and classified into 3 ripeness stages, 3 sizes and 2 health classes. The grading and sorting mechanism consisted of a lighting chamber (cloudy sky), a lighting source and a digital camera connected to a computer. The images were recorded in a special chamber with indirect radiation (cloudy sky) provided by four fluorescent lamps on each side; the camera viewed the interior of the lighting chamber through a hole that was covered by the camera lens. Three types of features were extracted from the final images: shape, color and texture. To obtain these features, images were needed in both color and binary format, following the procedure shown in Figure 1. For the first group, image characteristics were analysed to obtain the surface area (S.A.), maximum diameter (Dmax), minimum diameter (Dmin) and average diameter. Considering the importance of color in consumer acceptance of food quality, the following classification was used to estimate the apparent color of the tomato: 1. Classified as red (red > 90%); 2. Classified as red light (red or bold pink 60-90%); 3. Classified as pink (red 30-60%); 4. Classified as Turning

  18. 11th Annual Intelligent Ground Vehicle Competition: team approaches to intelligent driving and machine vision

    Science.gov (United States)

    Theisen, Bernard L.; Lane, Gerald R.

    2003-10-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned systems student competitions that were founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Both the U.S. and international teams focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 11 years, the competition has challenged both undergraduates and graduates, including Ph.D. students, with real-world applications in intelligent transportation systems, the military, and manufacturing automation. To date, teams from over 40 universities and colleges have participated. In this paper, we describe some of the applications of the technologies required by this competition, and discuss the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.

  19. Intelligent Machine Vision for Automated Fence Intruder Detection Using Self-organizing Map

    Directory of Open Access Journals (Sweden)

    Veldin A. Talorete Jr.

    2017-03-01

    Full Text Available This paper presents an intelligent machine vision system for automated fence intruder detection. A series of still images of fence events, captured using Internet Protocol cameras, was used as input to the system. Two classifiers were used: the first classifies human posture and the second classifies intruder location. The classifiers were implemented using Self-Organizing Maps after several image segmentation steps. The human posture classifier is in charge of classifying the detected subject's posture patterns from the subject's silhouette. Moreover, the intruder localization classifier estimates the location of the intruder with respect to the fence using geometric features from the images as inputs. The system is capable of activating an alarm, displaying the actual image and depicting the location of the intruder when one is detected. In detecting intruder posture, the system achieved a success rate of 88%. Overall system accuracy is 83% for day-time intruder localization and 88% for night-time intruder localization.
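
    A Self-Organizing Map classifier of the kind used for the posture stage can be sketched with the third-party minisom package (pip install minisom): train the map on silhouette feature vectors, label each node by majority vote, and classify a new sample via its best-matching node. The feature vectors, map size and labels below are synthetic placeholders, not the paper's data.
```python
from collections import Counter, defaultdict

import numpy as np
from minisom import MiniSom

# Placeholder silhouette feature vectors and posture labels (0 = standing, 1 = climbing).
rng = np.random.default_rng(0)
X = rng.random((100, 8))
y = rng.integers(0, 2, 100)

# Train a small self-organizing map on the posture features.
som = MiniSom(5, 5, input_len=X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=1000)

# Label each map node by majority vote of the training samples it wins.
votes = defaultdict(Counter)
for xi, yi in zip(X, y):
    votes[som.winner(xi)][yi] += 1
node_label = {node: counts.most_common(1)[0][0] for node, counts in votes.items()}

def classify(sample):
    """Classify a new silhouette feature vector via its best-matching SOM node."""
    return node_label.get(som.winner(sample), 0)   # default to class 0 for unused nodes

print(classify(rng.random(8)))
```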

  20. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
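
    The kind of robustness check described in the last two sentences can be illustrated with OpenCV's default HOG people detector and small random pixel perturbations. The random-noise search below is only a crude stand-in for the paper's symbolic-plus-statistical methodology, and the input image and noise amplitude are assumptions.
```python
import cv2
import numpy as np

# OpenCV's default HOG-based people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_people(frame):
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(rects)

frame = cv2.imread("pedestrian_frame.png")      # placeholder video frame containing people

# Add barely visible low-amplitude noise and check whether the detection result changes.
rng = np.random.default_rng(0)
baseline = count_people(frame)
failures = 0
for _ in range(20):
    noise = rng.integers(-3, 4, size=frame.shape, dtype=np.int16)
    perturbed = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    if count_people(perturbed) != baseline:
        failures += 1
print(f"{failures}/20 perturbed frames changed the detection result")
```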

  1. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    Science.gov (United States)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

    With the development of photoelectric detection technology, machine vision is used more and more widely in industry. This paper mainly introduces a calibrator measuring system for motor vehicle headlamp testers, of which a CCD image sampling system is the core. It also presents the measuring principle for the optical axial angle and light intensity, and demonstrates the linear relationship between the illumination of the calibrator's light spot (facula) and the image plane illumination. The paper provides an important specification of the CCD imaging system. Image processing in MATLAB yields the light spot's geometric midpoint and average gray level. By fitting these data with the least squares method, a regression equation relating illumination and gray level is obtained. The error of the measurement system's experimental results is analyzed, and the combined standard uncertainty and the sources of uncertainty of the optical axial angle are given. The average measuring accuracy of the optical axial angle is within 40''. The whole testing process uses digital means instead of manual judgement, which gives higher accuracy and better repeatability than other measuring systems.
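
    The least-squares fit of gray level against illumination mentioned above reduces to a one-line linear regression. The calibration numbers below are invented placeholders used only to show the fitting and inversion steps.
```python
import numpy as np

# Placeholder calibration data: measured illuminance (lux) of the calibrator's light spot
# and the corresponding average gray level extracted from the CCD image.
illuminance = np.array([50, 100, 150, 200, 250, 300], dtype=float)
gray_level  = np.array([31, 58, 86, 114, 140, 169], dtype=float)

# Least-squares fit of the linear relationship: gray = a * illuminance + b.
a, b = np.polyfit(illuminance, gray_level, deg=1)
print(f"regression: gray = {a:.3f} * E + {b:.2f}")

# Invert the fit to estimate illuminance from a newly measured average gray level.
measured_gray = 95.0
print("estimated illuminance:", (measured_gray - b) / a)
```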

  2. Yield Estimation of Sugar Beet Based on Plant Canopy Using Machine Vision Methods

    Directory of Open Access Journals (Sweden)

    S Latifaltojar

    2014-09-01

    Full Text Available Crop yield estimation is one of the most important parameters for information and resource management in precision agriculture. This information is employed for optimizing field inputs for subsequent cultivations. In the present study, the feasibility of sugar beet yield estimation by means of machine vision was studied. For the field experiments, strip images were taken during the growing season at one-month intervals. A horizontal-view image of the plant canopy was prepared at the end of each month. At the end of the growing season, beet roots were harvested and the correlation between the sugar beet canopy in each month of the growth period and the corresponding weight of the roots was investigated. Results showed that there was a strong correlation between the beet yield and the green surface area of autumn-cultivated sugar beets. The highest coefficient of determination was 0.85 at three months before harvest. In order to assess the accuracy of the final model, a second year of study was performed with the same methodology. The results depicted a strong relationship between the actual and estimated beet weights with R2=0.94. The model estimated beet yield with about 9 percent relative error. It is concluded that this method has appropriate potential for estimation of sugar beet yield based on band imaging prior to harvest.
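
    A minimal sketch of the canopy-to-yield idea, assuming an excess-green segmentation as a stand-in for the paper's canopy extraction and invented training numbers: the green pixel fraction is regressed against measured root yield and the fitted line is then applied to a new image.
```python
import cv2
import numpy as np

def green_area_fraction(path):
    """Fraction of canopy pixels, using the excess-green index (2G - R - B) as a stand-in
    for the paper's segmentation of green canopy from the strip images."""
    bgr = cv2.imread(path).astype(np.float32)
    b, g, r = cv2.split(bgr)
    exg = 2 * g - r - b
    return float((exg > 20).mean())              # placeholder threshold

# Placeholder training data: canopy green fractions three months before harvest
# and the corresponding measured root yields (t/ha).
green   = np.array([0.21, 0.35, 0.42, 0.55, 0.61])
yield_t = np.array([38.0, 52.0, 58.0, 71.0, 78.0])

# Simple linear yield model fitted by least squares.
slope, intercept = np.polyfit(green, yield_t, 1)
new_fraction = green_area_fraction("canopy_strip.png")       # placeholder image
print("estimated yield (t/ha):", slope * new_fraction + intercept)
```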

  3. Multisource Data Fusion Framework for Land Use/Land Cover Classification Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Salman Qadri

    2017-01-01

    Full Text Available Data fusion is a powerful tool for merging multiple sources of information to produce a better output than any individual source. This study describes the data fusion of five land use/cover types, that is, bare land, fertile cultivated land, desert rangeland, green pasture, and Sutlej basin river land, derived from remote sensing. A novel framework for multispectral and texture feature based data fusion is designed to identify the land use/land cover data types correctly. Multispectral data were obtained using a multispectral radiometer, while a digital camera was used for the image dataset. Each image contained 229 texture features, and an optimized set of 30 texture features per image was obtained by combining three feature selection techniques, that is, Fisher, Probability of Error plus Average Correlation, and Mutual Information. This 30-optimized-texture-feature dataset was merged with the five-spectral-feature dataset to build the fused dataset. A comparison was performed among the texture, multispectral, and fused datasets using machine vision classifiers. It was observed that the fused dataset outperformed both individual datasets. The overall accuracy acquired using a multilayer perceptron for texture data, multispectral data, and fused data was 96.67%, 97.60%, and 99.60%, respectively.
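
    The feature-level fusion step amounts to concatenating the selected texture features with the spectral features before classification. The sketch below uses random placeholder data and scikit-learn's multilayer perceptron only to show that comparison of texture-only, spectral-only and fused inputs; it is not the study's dataset or network configuration.
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Placeholder data: 30 selected texture features and 5 spectral features per sample,
# with one of five land use/cover classes as the label.
rng = np.random.default_rng(0)
n = 200
texture  = rng.random((n, 30))
spectral = rng.random((n, 5))
labels   = rng.integers(0, 5, n)

# Feature-level fusion: concatenate the two feature sets into one fused vector.
fused = np.hstack([texture, spectral])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
for name, X in [("texture only", texture), ("spectral only", spectral), ("fused", fused)]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name:13s} accuracy: {acc:.3f}")
```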

  4. A New High-Speed Foreign Fiber Detection System with Machine Vision

    Directory of Open Access Journals (Sweden)

    Zhiguo Chen

    2010-01-01

    Full Text Available A new high-speed foreign fiber detection system with machine vision is proposed for removing foreign fibers from raw cotton, using optimal hardware components and appropriate algorithm design. Built around a specialized lens and a three-charge-coupled-device (3-CCD) camera, the system applies a digital signal processor (DSP) and a field-programmable gate array (FPGA) to image acquisition and processing under ultraviolet illumination, so as to identify transparent objects such as polyethylene and polypropylene fabric in the cotton tuft flow by virtue of the fluorescent effect, until all foreign fibers have been blown away safely by compressed air and the required quality is achieved. An image segmentation algorithm based on the fast wavelet transform is proposed to identify block-like foreign fibers, and an improved Canny detector is also developed to segment wire-like foreign fibers from raw cotton. The procedure also provides a color image segmentation method with a region growing algorithm for better adaptability. Experiments on a variety of images show that the proposed algorithms can effectively segment foreign fibers from test images under various circumstances.

  5. Number Determination of Successfully Packaged Dies Per Wafer Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Hsuan-Ting Chang

    2015-04-01

    Full Text Available Packaging the integrated circuit (IC) chip is a necessary step in the manufacturing process of IC products. In general, wafers with the same size and process should have a fixed number of packaged dies. However, many factors decrease the number of the actually packaged dies, such as die scratching, die contamination, and die breakage, which are not considered in the existing die-counting methods. Here we propose a robust method that can automatically determine the number of actual packaged dies by using machine vision techniques. During the inspection, the image is taken from the top of the wafer, in which most dies have been removed and packaged. There are five steps in the proposed method: wafer region detection, wafer position calibration, dies region detection, detection of die sawing lines, and die number counting. The abnormal cases of fractional dies in the wafer boundary and dropped dies during the packaging are considered in the proposed method as well. The experimental results show that the precision and recall rates reach 99.83% and 99.84%, respectively, when determining the numbers of actual packaged dies in the 41 test cases.

  6. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation

    Science.gov (United States)

    O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-01-01

    Background Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problems do not possess the necessary technical background for this feature-set development. Objective The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning. This can be done by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. Methods We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was

  7. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation.

    Science.gov (United States)

    Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-08-04

    Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problems do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning. This can be done by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected. The ability of the
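
    The transfer-learning recipe described in these two records (plot the inertial signals as images, sort the plots into folders by exercise label, then retrain a pretrained Inception network) can be sketched in Keras as below. The directory layout, image size and training settings are assumptions, and the original work used TensorFlow's own retraining workflow rather than exactly this code.
```python
import tensorflow as tf

# Signal plots saved as images, one folder per exercise label, e.g. plots/squat/*.png
# (folder layout and image size are assumptions for this sketch).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "plots/", image_size=(299, 299), batch_size=32)
num_classes = len(train_ds.class_names)

# Pretrained Inception backbone (machine-vision weights), with its classifier removed.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False                        # transfer learning: keep the vision features fixed

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)                 # retrain only the new classification head
```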

  8. Of Genes and Machines: Application of a Combination of Machine Learning Tools to Astronomy Data Sets

    Science.gov (United States)

    Heinis, S.; Kumar, S.; Gezari, S.; Burgett, W. S.; Chambers, K. C.; Draper, P. W.; Flewelling, H.; Kaiser, N.; Magnier, E. A.; Metcalfe, N.; Waters, C.

    2016-04-01

    We apply a combination of genetic algorithm (GA) and support vector machine (SVM) machine learning algorithms to solve two important problems faced by the astronomical community: star-galaxy separation and photometric redshift estimation of galaxies in survey catalogs. We use the GA to select the relevant features in the first step, followed by optimization of SVM parameters in the second step to obtain an optimal set of parameters to classify or regress, in the process of which we avoid overfitting. We apply our method to star-galaxy separation in Pan-STARRS1 data. We show that our method correctly classifies 98% of objects down to i_P1 = 24.5, with a completeness (or true positive rate) of 99% for galaxies and 88% for stars. By combining colors with morphology, our star-galaxy separation method yields better results than the new SExtractor classifier spread_model, in particular at the faint end (i_P1 > 22). We also use our method to derive photometric redshifts for galaxies in the COSMOS bright multiwavelength data set down to an error in (1+z) of σ = 0.013, which compares well with estimates from spectral energy distribution fitting on the same data (σ = 0.007) while making a significantly smaller number of assumptions.

  9. Color machine vision system for process control in the ceramics industry

    Science.gov (United States)

    Penaranda Marques, Jose A.; Briones, Leoncio; Florez, Julian

    1997-08-01

    This paper focuses on the design of a machine vision system to solve a problem found in the manufacturing process of high quality polished porcelain tiles. This consists of sorting the tiles according to the criterion 'same appearance to the human eye', or in other words, by color and visual texture. In 1994 this problem was tackled and led to a prototype which became fully operational at production scale in a manufacturing plant named Porcelanatto, S.A. The system has evolved and has been adapted to meet the particular needs of this manufacturing company. Among the main issues that have been improved, it is worth pointing out: (1) an improved ability to discern subtle variations in color or texture, which are the main features of the visual appearance; (2) inspection time reduction, as a result of algorithm optimization and increased computing power; thus, 100 percent of the production can be inspected, reaching a maximum of 120 tiles/sec.; (3) adaptation to the different types and models of tiles manufactured. The tiles vary not only in their visible patterns but also in dimensions, formats, thickness and allowances. In this sense, one major problem has been reaching an optimal compromise: the system must be sensitive enough to discern subtle variations in color, but at the same time insensitive to thickness variations in the tiles. The following parts have been used to build the system: an RGB color line scan camera (12 bits per channel), a PCI frame grabber, a PC, fiber-optic-based illumination and the algorithm, which will be explained in section 4.

  10. Development and evaluation of a targeted orchard sprayer using machine vision technology

    Directory of Open Access Journals (Sweden)

    H Asaei

    2016-09-01

    Full Text Available Introduction In conventional methods of spraying in orchards, the amount of pesticide sprayed is not targeted. The pesticide consumption data indicate that the application rate of pesticide in greenhouses and orchards is more than required. Less than 30% of the pesticide sprayed actually reaches nursery canopies, while the rest is lost and wasted. Nowadays, variable rate spray applicators using intelligent control systems can greatly reduce pesticide use and off-target contamination of the environment in nurseries and orchards. In this research a prototype orchard sprayer based on machine vision technology was developed and evaluated. This sprayer performs real-time spraying based on the tree canopy structure and its greenness extent, which improves the efficiency of the spraying operation in orchards. Materials and Methods The equipment used in this study comprised three main parts: 1- mechanical equipment, 2- data collection and image processing system, 3- electronic control system. Two booms were designed to support the spray nozzles and to provide flexibility in directing the spray nozzles to the target. The boom comprised two parts, the vertical part and the inclined part. The vertical part of the boom was used to spray one side of the trees during forward movement of the tractor, and the inclined part of the boom was designed to spray the upper half of the tree canopy. Three nozzles were considered on each boom. On the vertical part of the boom, two nozzles were placed, whereas one other nozzle was mounted on the inclined part of the boom. To accommodate different tree heights, the vertical part of the boom was able to slide up and down. LabVIEW (version 2011) was used for real-time image processing. Images were captured through RGB cameras mounted on a horizontal bar attached to the top of the tractor, taking images separately for each side of the sprayer. Images were captured from the top of the canopies looking downward. The triggering signal for

  11. Vision System of Mobile Robot Combining Binocular and Depth Cameras

    National Research Council Canada - National Science Library

    Yuxiang Yang; Xiang Meng; Mingyu Gao

    2017-01-01

    In order to optimize the three-dimensional (3D) reconstruction and obtain more precise actual distances of the object, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper...

  12. Combining Formal Logic and Machine Learning for Sentiment Analysis

    DEFF Research Database (Denmark)

    Petersen, Niklas Christoffer; Villadsen, Jørgen

    2014-01-01

    This paper presents a formal logical method for deep structural analysis of the syntactical properties of texts using machine learning techniques for efficient syntactical tagging. To evaluate the method it is used for entity level sentiment analysis as an alternative to pure machine learning...

  13. A combined object-tracking algorithm for omni-directional vision-based AGV navigation

    Science.gov (United States)

    Yuan, Wei; Sun, Jie; Cao, Zuo-Liang; Tian, Jing; Yang, Ming

    2010-03-01

    A combined object-tracking algorithm that realizes real-time tracking of a selected object through omni-directional vision with a fisheye lens is presented. The new method combines a modified continuously adaptive mean shift (CamShift) algorithm with the Kalman filter. With the proposed method, the object-tracking problem when the object reappears after being occluded completely or moving out of the field of view is solved. The experimental results are good, and the proposed algorithm improves the robustness and accuracy of tracking in omni-directional vision.
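
    A bare-bones Python/OpenCV sketch of combining CamShift with a constant-velocity Kalman filter is shown below. The video file, initial window, noise covariances and the crude "object lost" check are illustrative assumptions, not the paper's modified algorithm.
```python
import cv2
import numpy as np

cap = cv2.VideoCapture("omni_video.avi")        # placeholder fisheye/omni-directional video
ok, frame = cap.read()
track_window = (300, 200, 80, 80)               # placeholder initial selection (x, y, w, h)

# Hue histogram of the selected object for CamShift back-projection.
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# Constant-velocity Kalman filter on the object centre: state = (x, y, vx, vy).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()                   # Kalman prediction bridges occlusions
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, criteria)
    cx, cy = rot_rect[0]                        # CamShift centre measurement
    if back_proj[int(cy) % frame.shape[0], int(cx) % frame.shape[1]] > 0:
        kf.correct(np.array([[cx], [cy]], np.float32))   # crude confidence check (assumption)
    else:
        # Object lost (occluded or out of view): fall back on the Kalman prediction.
        px = max(0, int(prediction[0, 0]) - 40)
        py = max(0, int(prediction[1, 0]) - 40)
        track_window = (px, py, 80, 80)
cap.release()
```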

  14. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    Science.gov (United States)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.

  15. A Machine Vision System For Real Time Inspection Of Moving Web

    Science.gov (United States)

    Guay, Jean-Louis C.

    1990-03-01

    A machine vision system for real time inspection of moving web is discussed. The system is for use in detecting, locating, identifying, classifying and sizing defects on a moving sheet of web and mapping the defects detected on the roll of web being inspected on a printer. On the latter, the precise location on the X and Y axes on the roll of web for each defect detected is recorded using a code whereby the type of defect is identified and its size reported. The system's unique configuration of hardware components includes a novel Time Delay and Integration (TDI) type camera as the sensors, a novel image processing board directly coupled to the TDI camera forming a camera-image processing board subassembly, and a novel master timing and synchronization board subsystem. The latter is capable of synchronizing the triggering, timing and sampling time of multiple channels within the TDI sensor and multiple TDI camera-image processing board subassemblies, in parallel within the same time frame, by controlling a novel master clocking scheme within a TDI logic circuit contained within each camera, wherein the clock speed for data output, in megahertz (MHz), remains constant regardless of web speed, whereas the imager clock speed and signal format vary with web speed. This allows a high degree of stop motion at any web speed, including high speeds (motion within the defined pixel size on the web during the sampling time is less than 0.00002" (0.02 mils)), and renders the system immune to process vibrations. Furthermore, through routines in firmware, the system's performance and abilities are not affected by web speed variations. The system is integrated on a 20-slot AT2-compatible backplane with a PC-bus single-board AT2-compatible computer system that contains various routines to generate data by processing various types of defects on various types of webs, which are downloaded on to each of the image processing boards' programmable chips at

  16. Combined Machine Learning Techniques for Decision Making Support in Medicine

    OpenAIRE

    Stoean, Ruxandra

    2016-01-01

    Computational intelligence support for decision making is becoming increasingly popular and essential among medical professionals. Also, with modern medical devices capable of communicating with ICT, the created models can easily find practical translation into software. Machine learning solutions for medicine range from the robust but opaque paradigms of support vector machines and neural networks to decision trees and rule-based models, which are also performant yet more comprehensible. So ho...

  17. Application of brushless machines with combined excitation for a hybrid car and an electric car

    Directory of Open Access Journals (Sweden)

    Gandzha S.A.

    2015-08-01

    Full Text Available This article shows the advantages of applying brushless machines with combined excitation (excitation from permanent magnets and an excitation winding) in hybrid and electric cars. This type of electric machine is compared with a typical brushless motor and an induction motor. The main advantages are the decrease in the dimensions of the electric machine and the reduction in the price of the electronic control system. The design and the principle of operation of the electric machine are shown. The machine was modeled using the SolidWorks program for the design and the Maxwell program for the magnetic field analysis. The results of tests are shown as well.

  18. Can early malignant melanoma be differentiated from atypical melanocytic nevus by in vivo techniques?: Part II. Automatic machine vision classification.

    Science.gov (United States)

    Gutkowicz-Krusin, D; Elbaum, M; Szwaykowski, P; Kopf, A W

    1997-02-01

    Differentiation between early (Breslow thickness less than 1 mm) malignant melanoma (MM) and atypical melanocytic nevus (AMN) remains a challenge even to trained clinicians. The purpose of this study is to determine the feasibility of reliable discrimination between early MM and AMN with noninvasive, objective, automatic machine vision techniques. A database of 104 digitized dermoscopic color transparencies of melanocytic lesions was used to develop and test our computer-based algorithms for classification of such lesions as malignant (MM) or benign (AMN). Histopathologic diagnoses (30 MM and 74 AMN) were used as the "gold standard" for training and testing the algorithms. A fully automatic, objective technique for differentiating between early MM and AMN from their dermoscopic digital images was developed. The multiparameter linear classifier was trained to provide 100% sensitivity for MM. In the blind test, this technique did not miss a single MM and its specificity was comparable to that of skilled dermatologists. Reliable differentiation between early MM and AMN with high sensitivity is possible using machine vision techniques to analyze digitized dermoscopic lesion images.

  19. Needs and Challenges of Seniors with Combined Hearing and Vision Loss

    Science.gov (United States)

    McDonnall, Michele C.; Crudden, Adele; LeJeune, B. J.; Steverson, Anne; O'Donnell, Nancy

    2016-01-01

    Introduction: The purpose of this study was to identify the needs and challenges of seniors with dual sensory loss (combined hearing and vision loss) and to determine priorities for training family members, community service providers, and professionals who work with them. Methods: Individuals (N = 131) with dual sensory loss between the ages of…

  20. Vision System of Mobile Robot Combining Binocular and Depth Cameras

    Directory of Open Access Journals (Sweden)

    Yuxiang Yang

    2017-01-01

    Full Text Available In order to optimize three-dimensional (3D) reconstruction and obtain more precise actual distances to the object, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper. The whole system consists of two identical color cameras, a TOF depth camera, an image processing host, a mobile robot control host, and a mobile robot. Because of structural constraints, the resolution of the TOF depth camera is very low, which makes it difficult to meet the requirements of trajectory planning. The resolution of binocular stereo cameras can be very high, but stereo matching does not work well for low-texture scenes, so binocular stereo cameras alone also struggle to meet the accuracy requirements. The proposed system therefore integrates the depth camera with stereo matching to improve the precision of the 3D reconstruction. Moreover, a double-thread processing method is applied to improve the efficiency of the system. The experimental results show that the system can effectively improve the accuracy of 3D reconstruction, determine the distance from the camera accurately, and support the trajectory planning strategy.
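
    A simplified sketch of fusing dense stereo with a low-resolution depth camera is shown below. The calibration values, image files and the "trust stereo where valid, fall back on TOF elsewhere" rule are assumptions for illustration, not the paper's exact fusion scheme.
```python
import cv2
import numpy as np

# Placeholder calibration: focal length (px) and stereo baseline (m) of the colour pair.
FOCAL_PX, BASELINE_M = 700.0, 0.12

left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)      # placeholder rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
tof   = cv2.imread("tof_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0  # metres

# Dense stereo matching with semi-global block matching.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Convert disparity to metric depth where the match is valid.
stereo_depth = np.zeros_like(disparity)
valid = disparity > 0
stereo_depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

# Upsample the low-resolution TOF depth map to the colour resolution.
tof_up = cv2.resize(tof, (left.shape[1], left.shape[0]), interpolation=cv2.INTER_NEAREST)

# Simple fusion rule: keep stereo depth where a valid match exists and fall back on the
# TOF measurement in low-texture regions where stereo matching failed.
fused = np.where(valid, stereo_depth, tof_up)
print("fused depth range (m):", float(fused.min()), float(fused.max()))
```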

  1. Real-time performance of a hands-free semi-autonomous wheelchair system using a combination of stereoscopic and spherical vision.

    Science.gov (United States)

    Nguyen, Jordan S; Nguyen, Tuan Nghia; Tran, Yvonne; Su, Steven W; Craig, Ashley; Nguyen, Hung T

    2012-01-01

    This paper is concerned with the operational performance of a semi-autonomous wheelchair system named TIM (Thought-controlled Intelligent Machine), which uses cameras in a system configuration modeled on the vision system of a horse. This new camera configuration utilizes stereoscopic vision for 3-Dimensional (3D) depth perception and mapping ahead of the wheelchair, combined with a spherical camera system for 360-degrees of monocular vision. The unique combination allows for static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected, during real-time autonomous navigation, minimizing blind-spots and preventing accidental collisions with people or obstacles. Combining this vision system with a shared control strategy provides intelligent assistive guidance during wheelchair navigation, and can accompany any hands-free wheelchair control technology for people with severe physical disability. Testing of this system in crowded dynamic environments has displayed the feasibility and real-time performance of this system when assisting hands-free control technologies, in this case being a proof-of-concept brain-computer interface (BCI).

  2. Investigation into the use of smartphone as a machine vision device for engineering metrology and flaw detection, with focus on drilling

    Science.gov (United States)

    Razdan, Vikram; Bateman, Richard

    2015-05-01

    This study investigates the use of a Smartphone and its camera vision capabilities in Engineering metrology and flaw detection, with a view to develop a low cost alternative to Machine vision systems which are out of range for small scale manufacturers. A Smartphone has to provide a similar level of accuracy as Machine Vision devices like Smart cameras. The objective set out was to develop an App on an Android Smartphone, incorporating advanced Computer vision algorithms written in java code. The App could then be used for recording measurements of Twist Drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for in-depth study of Machine vision systems and their capabilities, including a comparison between the HTC One X Android Smartphone and the Teledyne Dalsa BOA Smart camera. A review of the existing metrology Apps in the market was also undertaken. In addition, the drilling operation was evaluated to establish key measurement parameters of a twist Drill bit, especially flank wear and diameter. The methodology covers software development of the Android App, including the use of image processing algorithms like Gaussian Blur, Sobel and Canny available from OpenCV software library, as well as designing and developing the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for geometry of Twist Drill bits and holes, including diametrical measurements and flaw detection. The results show that Smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although dimensional accuracy achievable from the Smartphone App is below the level provided by Machine vision devices like Smart cameras. A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost Machine vision
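
    The record names Gaussian Blur, Sobel and Canny from OpenCV; the paper's App is written in Java for Android, so the following is only an OpenCV-Python sketch of a comparable blur, edge and contour measurement pipeline, with a hypothetical calibration factor px_per_mm and file name.

      # Sketch of the blur -> Canny -> contour pipeline used for dimensional
      # measurement; px_per_mm is a hypothetical pixels-per-millimetre factor.
      import cv2

      def measure_diameter_mm(image_path, px_per_mm):
          gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress sensor noise
          edges = cv2.Canny(blurred, 50, 150)                # Sobel-based edge map

          # Take the largest external contour as the drill-bit silhouette.
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          bit = max(contours, key=cv2.contourArea)

          # Minimum-area bounding rectangle: the short side approximates the diameter.
          (_, _), (w, h), _ = cv2.minAreaRect(bit)
          return min(w, h) / px_per_mm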

  3. Increased generalization capability of trainable COSFIRE filters with application to machine vision

    NARCIS (Netherlands)

    Azzopardi, George; Fernandez-Robles, Laura; Alegre, Enrique; Petkov, Nicolai

    2017-01-01

    The recently proposed trainable COSFIRE filters are highly effective in a wide range of computer vision applications, including object recognition, image classification, contour detection and retinal vessel segmentation. A COSFIRE filter is selective for a collection of contour parts in a certain

  4. Measuring the modulation-transfer function of radiation-tolerant machine-vision system using the sum of harmonic components of different frequency

    Science.gov (United States)

    Perezyabov, Oleg A.; Maltseva, Nadezhda K.; Ilinski, Aleksandr V.

    2017-05-01

    A number of robotic systems are used for nuclear power plant maintenance, and it is important to ensure the necessary safety level. Machine-vision systems are applied for this purpose, and there are special requirements for their image quality. To estimate the resolution of a video system, one should determine the system's response to a special test pattern. In this paper we describe a procedure for determining a set of modulation transfer function values of radiation-tolerant machine-vision systems using a test pattern containing a sum of harmonic functions of different frequencies.
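
    A worked form of the underlying relation, under the assumption that the composite test pattern is a sum of sinusoidal components and that the modulation of each component is recovered from the captured image; the notation is illustrative, not the paper's:

      I_{\mathrm{in}}(x) = I_0 + \sum_i a_i \cos(2\pi f_i x)
      \mathrm{MTF}(f_i) = \frac{M_{\mathrm{out}}(f_i)}{M_{\mathrm{in}}(f_i)},
      \qquad M = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}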

  5. Application brushless machines with combine excitation for a hybrid car and an electric car

    OpenAIRE

    Gandzha S.A.; Kiessh I.E.

    2015-01-01

    This article shows the advantages of applying brushless machines with combined excitation (excitation from permanent magnets and an excitation winding) for the hybrid car and the electric car. This type of electric machine is compared with a typical brushless motor and an induction motor. The main advantages are the decrease of the dimensions of the electric machine and the reduction of the price of the electronic control system. The design and the principle of operation of the electric...

  6. Application of generalized Hough transform for detecting sugar beet plant from weed using machine vision method

    Directory of Open Access Journals (Sweden)

    A Bakhshipour Ziaratgahi

    2017-05-01

    Full Text Available Introduction Sugar beet (Beta vulgaris L. as the second most important world’s sugar source after sugarcane is one of the major industrial crops. The presence of weeds in sugar beet fields, especially at early growth stages, results in a substantial decrease in the crop yield. It is very important to efficiently eliminate weeds at early growing stages. The first step of precision weed control is accurate detection of weeds location in the field. This operation can be performed by machine vision techniques. Hough transform is one of the shape feature extraction methods for object tracking in image processing which is basically used to identify lines or other geometrical shapes in an image. Generalized Hough transform (GHT is a modified version of the Hough transform used not only for geometrical forms, but also for detecting any arbitrary shape. This method is based on a pattern matching principle that uses a set of vectors of feature points (usually object edge points to a reference point to construct a pattern. By comparing this pattern with a set pattern, the desired shape is detected. The aim of this study was to identify the sugar beet plant from some common weeds in a field using the GHT. Materials and Methods Images required for this study were taken at the four-leaf stage of sugar beet as the beginning of the critical period of weed control. A shelter was used to avoid direct sunlight and prevent leaf shadows on each other. The obtained images were then introduced to the Image Processing Toolbox of MATLAB programming software for further processing. Green and Red color components were extracted from primary RGB images. In the first step, binary images were obtained by applying the optimal threshold on the G-R images. A comprehensive study of several sugar beet images revealed that there is a unique feature in sugar beet leaves which makes them differentiable from the weeds. The feature observed in all sugar beet plants at the four
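
    A rough OpenCV-Python sketch of the two steps summarized above, G-R thresholding followed by a Ballard generalized Hough transform trained on a leaf template; the file names and parameters are placeholders, and the example assumes an OpenCV build that exposes the GeneralizedHough bindings.

      # Sketch: G-R colour thresholding to get a binary vegetation mask, then a
      # Ballard generalized Hough transform trained on a sugar-beet leaf template.
      import cv2

      bgr = cv2.imread("field_image.png")
      g_minus_r = cv2.subtract(bgr[:, :, 1], bgr[:, :, 2])         # green minus red
      _, mask = cv2.threshold(g_minus_r, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # vegetation mask

      template = cv2.imread("beet_leaf_template.png", cv2.IMREAD_GRAYSCALE)
      ght = cv2.createGeneralizedHoughBallard()
      ght.setTemplate(template)                 # build the R-table from template edges
      ght.setMinDist(50)
      positions, votes = ght.detect(mask)       # candidate sugar-beet locations
      print(positions)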

  7. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    Science.gov (United States)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy for performing high-speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one picture every time the conveyor moves a distance ds, and the objects' positions and shapes are obtained by image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-based control strategy.
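
    A minimal sketch of the "servo motor + synchronous conveyor" tracking idea: an object's pick-up coordinate is its coordinate at detection time plus the belt travel reported by the conveyor encoder since that frame. All names and numbers are illustrative only.

      # Sketch of conveyor tracking: predict an object's current position from the
      # position observed in the camera frame plus the measured belt travel.
      from dataclasses import dataclass

      @dataclass
      class TrackedObject:
          x_at_detect: float        # mm, along the belt axis, in robot coordinates
          y: float                  # mm, across the belt
          encoder_at_detect: float  # mm of belt travel when the image was taken

      def current_position(obj: TrackedObject, encoder_now: float):
          """Predict where the object is now, given total belt travel so far."""
          return (obj.x_at_detect + (encoder_now - obj.encoder_at_detect), obj.y)

      # Example: detected at x = 120 mm when the encoder read 3400 mm of travel;
      # by the time the robot is ready the encoder reads 3525 mm.
      obj = TrackedObject(x_at_detect=120.0, y=35.0, encoder_at_detect=3400.0)
      print(current_position(obj, encoder_now=3525.0))   # -> (245.0, 35.0)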

  8. Tracking objects with fixed-wing UAV using model predictive control and machine vision

    OpenAIRE

    Skjong, Espen; Nundal, Stian Aas

    2014-01-01

    This thesis describes the development of an object tracking system for unmanned aerial vehicles (UAVs), intended to be used for search and rescue (SAR) missions. The UAV is equipped with a two-axis gimbal system, which houses an infrared (IR) camera used to detect and track objects of interest, and a lower level autopilot. An external computer vision (CV) module is assumed implemented and connected to the object tracking system, providing object positions and velocities to the control system....

  9. The in-situ 3D measurement system combined with CNC machine tools

    Science.gov (United States)

    Zhao, Huijie; Jiang, Hongzhi; Li, Xudong; Sui, Shaochun; Tang, Limin; Liang, Xiaoyue; Diao, Xiaochun; Dai, Jiliang

    2013-06-01

    With the development of manufacturing industry, the in-situ 3D measurement for the machining workpieces in CNC machine tools is regarded as the new trend of efficient measurement. We introduce a 3D measurement system based on the stereovision and phase-shifting method combined with CNC machine tools, which can measure 3D profile of the machining workpieces between the key machining processes. The measurement system utilizes the method of high dynamic range fringe acquisition to solve the problem of saturation induced by specular lights reflected from shiny surfaces such as aluminum alloy workpiece or titanium alloy workpiece. We measured two workpieces of aluminum alloy on the CNC machine tools to demonstrate the effectiveness of the developed measurement system.

  10. Machine vision process monitoring on a poultry processing kill line: results from an implementation

    Science.gov (United States)

    Usher, Colin; Britton, Dougl; Daley, Wayne; Stewart, John

    2005-11-01

    Researchers at the Georgia Tech Research Institute designed a vision inspection system for poultry kill line sorting with the potential for process control at various points throughout a processing facility. This system has been successfully operating in a plant for over two and a half years and has been shown to provide multiple benefits. With the introduction of HACCP-Based Inspection Models (HIMP), the opportunity for automated inspection systems to emerge as viable alternatives to human screening is promising. As more plants move to HIMP, these systems have the great potential for augmenting a processing facilities visual inspection process. This will help to maintain a more consistent and potentially higher throughput while helping the plant remain within the HIMP performance standards. In recent years, several vision systems have been designed to analyze the exterior of a chicken and are capable of identifying Food Safety 1 (FS1) type defects under HIMP regulatory specifications. This means that a reliable vision system can be used in a processing facility as a carcass sorter to automatically detect and divert product that is not suitable for further processing. This improves the evisceration line efficiency by creating a smaller set of features that human screeners are required to identify. This can reduce the required number of screeners or allow for faster processing line speeds. In addition to identifying FS1 category defects, the Georgia Tech vision system can also identify multiple "Other Consumer Protection" (OCP) category defects such as skin tears, bruises, broken wings, and cadavers. Monitoring this data in an almost real-time system allows the processing facility to address anomalies as soon as they occur. The Georgia Tech vision system can record minute-by-minute averages of the following defects: Septicemia Toxemia, cadaver, over-scald, bruises, skin tears, and broken wings. In addition to these defects, the system also records the length and

  11. Combining satellite imagery and machine learning to predict poverty.

    Science.gov (United States)

    Jean, Neal; Burke, Marshall; Xie, Michael; Davis, W Matthew; Lobell, David B; Ermon, Stefano

    2016-08-19

    Reliable data on economic livelihoods remain scarce in the developing world, hampering efforts to study these outcomes and to design policies that improve them. Here we demonstrate an accurate, inexpensive, and scalable method for estimating consumption expenditure and asset wealth from high-resolution satellite imagery. Using survey and satellite data from five African countries--Nigeria, Tanzania, Uganda, Malawi, and Rwanda--we show how a convolutional neural network can be trained to identify image features that can explain up to 75% of the variation in local-level economic outcomes. Our method, which requires only publicly available data, could transform efforts to track and target poverty in developing countries. It also demonstrates how powerful machine learning techniques can be applied in a setting with limited training data, suggesting broad potential application across many scientific domains. Copyright © 2016, American Association for the Advancement of Science.

  12. Theory research of seam recognition and welding torch pose control based on machine vision

    Science.gov (United States)

    Long, Qiang; Zhai, Peng; Liu, Miao; He, Kai; Wang, Chunyang

    2017-03-01

    At present, the automation requirements for welding are becoming higher, so a method for extracting welding seam information with a vision sensor is proposed in this paper and simulated with MATLAB. In addition, in order to improve the quality of robotic automatic welding, an information retrieval method for welding torch pose control by visual sensor is attempted. Considering the demands of welding technology and engineering practice, the relevant coordinate systems and variables are strictly defined, a mathematical model of the welding torch pose is established, and its feasibility is verified by MATLAB simulation. These works lay a foundation for the development of a welding off-line programming system with high precision and quality.

  13. Design Considerations for Scalable High-Performance Vision Systems Embedded in Industrial Print Inspection Machines

    Directory of Open Access Journals (Sweden)

    Rössler Peter

    2007-01-01

    Full Text Available This paper describes the design of a scalable high-performance vision system which is used in the application area of optical print inspection. The system is able to process hundreds of megabytes of image data per second coming from several high-speed/high-resolution cameras. Due to performance requirements, some functionality has been implemented on dedicated hardware based on a field programmable gate array (FPGA, which is coupled to a high-end digital signal processor (DSP. The paper discusses design considerations like partitioning of image processing algorithms between hardware and software. The main chapters focus on functionality implemented on the FPGA, including low-level image processing algorithms (flat-field correction, image pyramid generation, neighborhood operations and advanced processing units (programmable arithmetic unit, geometry unit. Verification issues for the complex system are also addressed. The paper concludes with a summary of the FPGA resource usage and some performance results.

  14. Development of self-adjusting hydraulic machine for combination forming of upsetting and extruding

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In the paper a self-adjusting hydraulic machine for combination forming of upsetting and extruding is systematically presented in terms of mechanical principle, design principle, machine construction, design of the key components and working routine. The machine is designed with the following features: the lower movable beam is adjusted by the ejecting cylinder, the upper upsetting beam is reset by the backstroke slide rods, and the upsetting cylinders communicate with the gas-liquid accumulators. These features make the machine configuration compact, eliminate both the backstroke cylinder of the upper upsetting beam and the upsetting cylinder of the lower movable beam, and simplify the hydraulic system. Furthermore, the machine can resolve such problems as incomplete filling at the addendum position, microcracks at the dedendum position, excessive forming force and low die life during precision forging of spur gears.

  15. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.

    Science.gov (United States)

    Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L

    2016-03-18

    Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92 % of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802 .
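
    A toy sketch of shared control as a weighted blend of a BMI-decoded velocity command and an autonomous controller's command toward a stable grasp pose; the linear weighting rule and its parameters are illustrative assumptions, not the scheme used in the study.

      # Sketch: blend the BMI velocity command with the autonomous grasp command,
      # increasing assistance as the hand approaches the object.
      import numpy as np

      def blend_commands(v_bmi, v_auto, dist_to_target, assist_radius=0.15):
          """Blend 3-D velocity commands (m/s); dist_to_target in metres.
          alpha = 0 -> pure BMI control, alpha -> 0.8 near the object."""
          alpha = np.clip(1.0 - dist_to_target / assist_radius, 0.0, 0.8)
          return (1.0 - alpha) * np.asarray(v_bmi) + alpha * np.asarray(v_auto)

      print(blend_commands([0.05, 0.0, -0.02], [0.03, 0.01, -0.04], dist_to_target=0.05))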

  16. An Architecture for Hybrid Manufacturing Combining 3D Printing and CNC Machining

    OpenAIRE

    Marcel Müller; Elmar Wings

    2016-01-01

    Additive manufacturing is one of the key technologies of the 21st century. Additive manufacturing processes are often combined with subtractive manufacturing processes to create hybrid manufacturing because it is useful for manufacturing complex parts, for example, 3D printed sensor systems. Currently, several CNC machines are required for hybrid manufacturing: one machine is required for additive manufacturing and one is required for subtractive manufacturing. Disadvantages of conventional h...

  17. Computing the Solutions of the Combined Korteweg-de Vries Equation by Turing Machines

    Directory of Open Access Journals (Sweden)

    Dianchen Lu

    2010-06-01

    Full Text Available In this paper, we study the computability of the initial value problem of the Combined KdV equation. It is shown that, for any integer s>2, the nonlinear solution operator which maps an initial condition data to the solution of the Combined KdV equation can be computed by a Turing machine.

  18. Combined Heat and Power: A Decade of Progress, A Vision for the Future

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2009-08-01

    Over the past 10 years, DOE has built a solid foundation for a robust CHP marketplace. We have aligned with key partners to produce innovative technologies and spearhead market-transforming projects. Our commercialization activities and Clean Energy Regional Application Centers have expanded CHP across the nation. More must be done to tap CHP’s full potential. Read more about DOE’s CHP Program in “Combined Heat and Power: A Decade of Progress, A Vision for the Future.”

  19. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    Science.gov (United States)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    Apple is one of the most highly consumed fruits in daily life. However, due to its high damage potential and the strong influence of damage on taste and export value, the quality of apples has to be assessed before they reach the consumer's hand. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the position of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing result. The hardware-software system acquires the images of three samples from each camera and displays the image processing result in real time. An image processing algorithm was developed in OpenCV and C++. The software controls the hardware system and classifies each apple into one of two grades based on the presence or absence of surface bruises of 5 mm in size. The experimental results are promising, and with further modification the system could be applied to industrial production in the near future.

  20. Automated analysis of retinal imaging using machine learning techniques for computer vision.

    Science.gov (United States)

    De Fauw, Jeffrey; Keane, Pearse; Tomasev, Nenad; Visentin, Daniel; van den Driessche, George; Johnson, Mike; Hughes, Cian O; Chu, Carlton; Ledsam, Joseph; Back, Trevor; Peto, Tunde; Rees, Geraint; Montgomery, Hugh; Raine, Rosalind; Ronneberger, Olaf; Cornebise, Julien

    2016-01-01

    There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight threatening diseases, such as diabetic retinopathy and age related macular degeneration have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves). Changes in population demographics and expectations and the changing pattern of chronic diseases creates a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring the therapeutic success.

  1. An Architecture for Hybrid Manufacturing Combining 3D Printing and CNC Machining

    Directory of Open Access Journals (Sweden)

    Marcel Müller

    2016-01-01

    Full Text Available Additive manufacturing is one of the key technologies of the 21st century. Additive manufacturing processes are often combined with subtractive manufacturing processes to create hybrid manufacturing because it is useful for manufacturing complex parts, for example, 3D printed sensor systems. Currently, several CNC machines are required for hybrid manufacturing: one machine is required for additive manufacturing and one is required for subtractive manufacturing. Disadvantages of conventional hybrid manufacturing methods are presented. Hybrid manufacturing with one CNC machine offers many advantages. It enables manufacturing of parts with higher accuracy, less production time, and lower costs. Using the example of fused layer modeling (FLM, we present a general approach for the integration of additive manufacturing processes into a numerical control for machine tools. The resulting CNC architecture is presented and its functionality is demonstrated. Its application is beyond the scope of this paper.

  2. A Modified Method Combined with a Support Vector Machine and Bayesian Algorithms in Biological Information

    Directory of Open Access Journals (Sweden)

    Wen-Gang Zhou

    2015-06-01

    Full Text Available With deepening research in genomics and proteomics, the number of new protein sequences has expanded rapidly. Given the obvious shortcomings of the traditional experimental method, namely high cost and low efficiency, computational methods for protein localization prediction have attracted a lot of attention due to their convenience and low cost. Among machine learning techniques, neural networks and the support vector machine (SVM) are often used as learning tools. Due to its complete theoretical framework, the SVM has been widely applied. In this paper, we improve the existing support vector machine algorithm and develop a new algorithm that combines it with Bayesian algorithms. The proposed algorithm improves calculation efficiency and eliminates defects of the original algorithm. Verification shows the method to be valid; at the same time, it reduces calculation time and improves prediction efficiency.
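
    The abstract does not spell out how the SVM and Bayesian components are combined; one common way to realize such a combination is soft (probability-averaging) voting, sketched here with scikit-learn on synthetic data. The paper's specific combination scheme may differ.

      # Sketch: combine an SVM and a Bayesian classifier by soft voting.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=400, n_features=20, random_state=0)

      combined = VotingClassifier(
          estimators=[("svm", SVC(probability=True, kernel="rbf")),
                      ("bayes", GaussianNB())],
          voting="soft",                      # average predicted probabilities
      )
      print(cross_val_score(combined, X, y, cv=5).mean())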

  3. Estimation of a fluorescent lamp spectral distribution for color image in machine vision

    OpenAIRE

    Corzo, Luis Galo; Penaranda, Jose Antonio; Peer, Peter

    2014-01-01

    We present a technique to quickly estimate the Illumination Spectral Distribution (ISD) in an image illuminated by a fluorescent lamp. It is assumed that the object colors are a set of colors for which spectral reflectances are available (in our experiments we use spectral measurements of a 12-color checker chart), the sensitivities of the camera sensors are known and the camera response is linear. Thus, the ISD can be approximated by a finite linear combination of a small number of basis fun...
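
    A sketch of the estimation step implied above: with a linear camera, the response of channel k to patch p is a sum over wavelengths of illuminant, reflectance and sensitivity, and writing the illuminant as a weighted sum of basis functions reduces the problem to a small linear least-squares fit. All arrays below are random placeholders standing in for measured data.

      # Sketch: rho[p,k] = sum_l E(l) * S_p(l) * R_k(l), with E(l) = sum_j w_j * B_j(l).
      import numpy as np

      wavelengths = np.arange(400, 701, 10)                  # nm, 31 samples
      n_basis, n_patches, n_channels = 3, 12, 3

      B = np.random.rand(n_basis, wavelengths.size)          # illuminant basis B_j(l)
      S = np.random.rand(n_patches, wavelengths.size)        # patch reflectances S_p(l)
      R = np.random.rand(n_channels, wavelengths.size)       # camera sensitivities R_k(l)
      rho = np.random.rand(n_patches, n_channels)            # measured camera responses

      # Design matrix: one row per (patch, channel), one column per basis function.
      A = np.einsum("jl,pl,kl->pkj", B, S, R).reshape(-1, n_basis)
      w, *_ = np.linalg.lstsq(A, rho.reshape(-1), rcond=None)
      isd_estimate = w @ B                                   # estimated spectrum E(l)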

  4. Towards autonomic computing in machine vision applications: techniques and strategies for in-line 3D reconstruction in harsh industrial environments

    Science.gov (United States)

    Molleda, Julio; Usamentiaga, Rubén; García, Daniel F.; Bulnes, Francisco G.

    2011-03-01

    Nowadays machine vision applications require skilled users to configure, tune, and maintain. Because such users are scarce, the robustness and reliability of applications are usually significantly affected. Autonomic computing offers a set of principles such as self-monitoring, self-regulation, and self-repair which can be used to partially overcome those problems. Systems which include self-monitoring observe their internal states, and extract features about them. Systems with self-regulation are capable of regulating their internal parameters to provide the best quality of service depending on the operational conditions and environment. Finally, self-repairing systems are able to detect anomalous working behavior and to provide strategies to deal with such conditions. Machine vision applications are the perfect field to apply autonomic computing techniques. This type of application has strong constraints on reliability and robustness, especially when working in industrial environments, and must provide accurate results even under changing conditions such as luminance, or noise. In order to exploit the autonomic approach of a machine vision application, we believe the architecture of the system must be designed using a set of orthogonal modules. In this paper, we describe how autonomic computing techniques can be applied to machine vision systems, using as an example a real application: 3D reconstruction in harsh industrial environments based on laser range finding. The application is based on modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring (middle level) and supervision (high level). High level modules supervise the execution of low-level modules. Based on the information gathered by mid-level modules, they regulate low-level modules in order to optimize the global quality of service, and tune the module parameters based on operational conditions and on the environment. Regulation actions involve

  5. Children with developmental coordination disorder benefit from using vision in combination with touch information for quiet standing.

    Science.gov (United States)

    Bair, Woei-Nan; Barela, José A; Whitall, Jill; Jeka, John J; Clark, Jane E

    2011-06-01

    In two experiments, the ability to use multisensory information (haptic information, provided by lightly touching a stationary surface, and vision) for quiet standing was examined in typically developing (TD) children, adults, and in seven-year-old children with Developmental Coordination Disorder (DCD). Four sensory conditions (no touch/no vision, with touch/no vision, no touch/with vision, and with touch/with vision) were employed. In experiment 1, we tested four-, six- and eight-year-old TD children and adults to provide a developmental landscape for performance on this task. In experiment 2, we tested a group of seven-year-old children with DCD and their age-matched TD peers. For all groups, touch robustly attenuated standing sway suggesting that children as young as four years old use touch information similarly to adults. Touch was less effective in children with DCD compared to their TD peers, especially in attenuating their sway velocity. Children with DCD, unlike their TD peers, also benefited from using vision to reduce sway. The present results suggest that children with DCD benefit from using vision in combination with touch information for standing control possibly due to their less well developed internal models of body orientation and self-motion. Internal model deficits, combined with other known deficits such as postural muscles activation timing deficits, may exacerbate the balance impairment in children with DCD. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

    Full Text Available This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer for machine vision and robot guidance industrial applications. The proposed actuator principally aims to overcome the limited spotlight focus angle of digital visual data capture transducers (digital cameras) and to enhance the ability of machine vision systems to perceive and move in 3D. The actuator design, structure, working principles and finite element analysis are discussed in this paper. A prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide 3D motion with multiple degrees of freedom, a typical speed of movement of 35 revolutions per minute, a resolution of less than 5 μm and a maximum load of 3.5 N. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers and the improvements that such technology could bring to machine vision and robot guidance industrial applications.

  7. Fast and flexible 3D object recognition solutions for machine vision applications

    Science.gov (United States)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
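
    A minimal RANSAC plane fit over a point cloud, illustrating the kind of primitive best-fitting the recognizer described above relies on; the plane-only scope and the parameter values are simplifications, not the paper's implementation.

      # Sketch: RANSAC fit of a plane primitive to an (N, 3) point cloud.
      import numpy as np

      def ransac_plane(points, n_iter=500, tol=0.002, rng=np.random.default_rng(0)):
          best_inliers, best_model = None, None
          for _ in range(n_iter):
              sample = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              norm = np.linalg.norm(normal)
              if norm < 1e-9:                        # degenerate (collinear) sample
                  continue
              normal /= norm
              d = -normal @ sample[0]
              dist = np.abs(points @ normal + d)     # point-to-plane distances
              inliers = dist < tol
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (normal, d)
          return best_model, best_inliers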

  8. Chain code technique for the classification and orientation of simple objects in a machine vision task

    Science.gov (United States)

    Kerr, David; Wakenshaw, Jonathan T.

    1999-08-01

    Many routine pick-and-place tasks require a combinational software analysis approach in that a particular object must first be recognized before orienting a robot gripper or other tool to pick it up. The first step requires the segmentation of pattern feature from the image in order to make the classification. The second step concerns the determination of the position and orientation of the classified object. We present an approach to this two-stage problem that utilizes only the Freeman chain code of the object outline, rather than the image itself. We show that, given the chain code, it is possible to segment a number of specific geometrical pattern features that can be used to identify the object. From the same code, it is further demonstrated that the object location can be specified by computing its center of mass and minor axis of inertia. It is thus possible to identify and locate entities within an image given only their chain codes. The algorithms are demonstrated on a variety of simple shapes. The method is at present restricted to solid shapes, but could be extended to include objects of greater complexity.
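
    A small sketch of the second stage described above: decode an 8-connected Freeman chain code into boundary points and estimate the centroid and principal-axis angle from boundary moments (a simplification; the paper derives these quantities from the chain code directly).

      # Sketch: Freeman chain code -> boundary points -> centroid and orientation.
      import math

      # Direction vectors for Freeman codes 0..7 (0 = east, counter-clockwise).
      STEPS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

      def decode_chain(start, codes):
          x, y = start
          pts = [(x, y)]
          for c in codes:
              dx, dy = STEPS[c]
              x, y = x + dx, y + dy
              pts.append((x, y))
          return pts

      def centroid_and_orientation(pts):
          n = len(pts)
          cx = sum(p[0] for p in pts) / n
          cy = sum(p[1] for p in pts) / n
          mu20 = sum((p[0] - cx) ** 2 for p in pts)
          mu02 = sum((p[1] - cy) ** 2 for p in pts)
          mu11 = sum((p[0] - cx) * (p[1] - cy) for p in pts)
          theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)   # principal-axis angle
          return (cx, cy), theta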

  9. Automatically designed machine vision system for the localization of CCA transverse section in ultrasound images.

    Science.gov (United States)

    Benes, Radek; Karasek, Jan; Burget, Radim; Riha, Kamil

    2013-01-01

    The common carotid artery (CCA) is a source of important information that doctors can use to evaluate the patients' health. The most often measured parameters are arterial stiffness, lumen diameter, wall thickness, and other parameters where variation with time is usually measured. Unfortunately, the manual measurement of dynamic parameters of the CCA is time consuming, and therefore, for practical reasons, the only alternative is automatic approach. The initial localization of artery is important and must precede the main measurement. This article describes a novel method for the localization of CCA in the transverse section of a B-mode ultrasound image. The novel method was designed automatically by using the grammar-guided genetic programming (GGGP). The GGGP searches for the best possible combination of simple image processing tasks (independent building blocks). The best possible solution is represented with the highest detection precision. The method is tested on a validation database of CCA images that was specially created for this purpose and released for use by other scientists. The resulting success of the proposed solution was 82.7%, which exceeded the current state of the art by 4% while the computation time requirements were acceptable. The paper also describes an automatic method that was used in designing the proposed solution. This automatic method provides a universal approach to designing complex solutions with the support of evolutionary algorithms. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Social collective intelligence: combining the powers of humans and machines to build a smarter society

    NARCIS (Netherlands)

    Miorandi, Daniele; Maltese, Vincenzo; Rovatsos, Michael; Nijholt, Antinus; Stewart, James

    2014-01-01

    The book focuses on Social Collective Intelligence, a term used to denote a class of socio-technical systems that combine, in a coordinated way, the strengths of humans, machines and collectives in terms of competences, knowledge and problem solving capabilities with the communication, computing and

  11. COMBINED TOOL WITH POSSIBILITIES FOR ONE-TIME MACHINING OF OPENING OF HYDRAULIC CYLINDERS

    Directory of Open Access Journals (Sweden)

    Pavel Petrov

    2016-12-01

    Full Text Available The present article suggests a new design for the cutting part of a combined tool for machining the holes of hydraulic cylinders, in which the cutting inserts of the movable two-blade block are axially displaced. As a result, the machining allowance removed increases, which provides high accuracy and productivity.

  12. Optimum cereal combine harvester operation by means of automatic machine and threshing speed control

    NARCIS (Netherlands)

    Huisman, W.

    1983-01-01

    The method by which automation of agricultural machinery can be developed is illustrated in the case of cereal combine harvesting. The controlled variables are machine forward speed and threshing cylinder peripheral speed. Four control systems have been developed that optimise these speeds on the

  13. A Robust Machine Vision Algorithm Development for Quality Parameters Extraction of Circular Biscuits and Cookies Digital Images

    Directory of Open Access Journals (Sweden)

    Satyam Srivastava

    2014-01-01

    Full Text Available Biscuits and cookies are one of the major parts of Indian bakery products. The bake level of biscuits and cookies is of significant value as it determines the taste, texture, number of chocolate chips, uniformity of chocolate chip distribution, and various features related to the appearance of the products. Six threshold methods (isodata, Otsu, minimum error, moment preserving, fuzzy, and manual thresholding) and k-means clustering have been implemented for chocolate chip extraction from captured cookie images. Various other image processing operations such as entropy calculation, area calculation, perimeter calculation, baked dough color, solidity, and fraction of top surface area have been implemented for commercial KrackJack biscuits and cookies. The proposed algorithm is able to detect and investigate various defects such as cracks and spots. A simple and low-cost machine vision system with an improved, robust algorithm for quality detection and identification is envisaged. The developed system and robust algorithm have wide application in biscuit and cookie baking companies. The proposed system is composed of a monochromatic light source and a USB-based 10.0-megapixel camera interfaced with an ARM-9 processor for image acquisition. MATLAB version 5.2 has been used for development of the robust algorithms and for testing on various captured frames. The developed methods and procedures were tested on commercial biscuits, resulting in specificity and sensitivity of more than 94% and 82%, respectively. Since the developed software package has been tested on commercial biscuits, it can be programmed to inspect other manufactured bakery products.
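
    A sketch of one of the listed thresholding steps (Otsu) used to segment dark chocolate chips from the lighter baked dough and count them; the file name and the minimum chip area are illustrative values, not the paper's settings.

      # Sketch: Otsu thresholding to extract dark chocolate chips from a cookie image.
      import cv2

      gray = cv2.imread("cookie.png", cv2.IMREAD_GRAYSCALE)
      _, chips = cv2.threshold(gray, 0, 255,
                               cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

      contours, _ = cv2.findContours(chips, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
      chip_contours = [c for c in contours if cv2.contourArea(c) > 30]
      print("chocolate chips found:", len(chip_contours))
      print("total chip area (px):", sum(cv2.contourArea(c) for c in chip_contours))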

  14. Robot Physical Interaction through the combination of Vision, Tactile and Force Feedback Applications to Assistive Robotics

    CERN Document Server

    Prats, Mario; Sanz, Pedro J

    2013-01-01

    Robot manipulation is a great challenge; it encompasses versatility (adaptation to different situations), autonomy (independent robot operation), and dependability (success under modeling or sensing errors). A complete manipulation task involves, first, a suitable grasp or contact configuration, and the subsequent motion required by the task. This monograph presents a unified framework by introducing task-related aspects into the knowledge-based grasp concept, leading to task-oriented grasps. Similarly, grasp-related issues are also considered during the execution of a task, leading to grasp-oriented tasks; together this is called the framework for physical interaction (FPI). The book presents the theoretical framework for the versatile specification of physical interaction tasks, as well as the problem of autonomous planning of these tasks. A further focus is on sensor-based dependable execution combining three different types of sensors: force, vision and tactile. The FPI approach allows to perform a wide range of ro...

  15. A machine vision system for automated non-invasive assessment of cell viability via dark field microscopy, wavelet feature selection and classification

    Directory of Open Access Journals (Sweden)

    Friehs Karl

    2008-10-01

    Full Text Available Abstract Background Cell viability is one of the basic properties indicating the physiological state of the cell, thus, it has long been one of the major considerations in biotechnological applications. Conventional methods for extracting information about cell viability usually need reagents to be applied on the targeted cells. These reagent-based techniques are reliable and versatile, however, some of them might be invasive and even toxic to the target cells. In support of automated noninvasive assessment of cell viability, a machine vision system has been developed. Results This system is based on supervised learning technique. It learns from images of certain kinds of cell populations and trains some classifiers. These trained classifiers are then employed to evaluate the images of given cell populations obtained via dark field microscopy. Wavelet decomposition is performed on the cell images. Energy and entropy are computed for each wavelet subimage as features. A feature selection algorithm is implemented to achieve better performance. Correlation between the results from the machine vision system and commonly accepted gold standards becomes stronger if wavelet features are utilized. The best performance is achieved with a selected subset of wavelet features. Conclusion The machine vision system based on dark field microscopy in conjugation with supervised machine learning and wavelet feature selection automates the cell viability assessment, and yields comparable results to commonly accepted methods. Wavelet features are found to be suitable to describe the discriminative properties of the live and dead cells in viability classification. According to the analysis, live cells exhibit morphologically more details and are intracellularly more organized than dead ones, which display more homogeneous and diffuse gray values throughout the cells. Feature selection increases the system's performance. The reason lies in the fact that feature
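
    A sketch of the wavelet feature extraction described above, computing energy and Shannon entropy for every wavelet subimage with PyWavelets; the wavelet family and decomposition level are assumptions, not the paper's settings.

      # Sketch: per-subimage energy and entropy features from a 2-D wavelet decomposition.
      import numpy as np
      import pywt

      def wavelet_features(image, wavelet="db4", level=3):
          coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
          # Flatten: approximation subimage + (horizontal, vertical, diagonal) per level.
          subimages = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
          features = []
          for sub in subimages:
              sq = sub.ravel() ** 2
              energy = sq.sum()
              p = sq / energy if energy > 0 else np.ones_like(sq) / sq.size
              entropy = -(p * np.log2(p + 1e-12)).sum()
              features.extend([energy, entropy])
          return np.array(features)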

  16. A study of electrodischarge machining–pulse electrochemical machining combined machining for holes with high surface quality on superalloy

    OpenAIRE

    Ning Ma; Xiaolong Yang; Mingqian Gao; Jinlong Song; Ganlin Liu; Wenji Xu

    2015-01-01

    Noncircular holes on the surface of turbine rotor blades are usually machined by electrodischarge machining. A recast layer containing numerous micropores and microcracks is easily generated during the electrodischarge machining process due to the rapid heating and cooling effects, which restrict the wide applications of noncircular holes in aerospace and aircraft industries. Owing to the outstanding advantages of pulse electrochemical machining, electrodischarge machining–pulse electrochemic...

  17. Zooniverse: Combining Human and Machine Classifiers for the Big Survey Era

    Science.gov (United States)

    Fortson, Lucy; Wright, Darryl; Beck, Melanie; Lintott, Chris; Scarlata, Claudia; Dickinson, Hugh; Trouille, Laura; Willi, Marco; Laraia, Michael; Boyer, Amy; Veldhuis, Marten; Zooniverse

    2018-01-01

    Many analyses of astronomical data sets, ranging from morphological classification of galaxies to identification of supernova candidates, have relied on humans to classify data into distinct categories. Crowdsourced galaxy classifications via the Galaxy Zoo project provided a solution that scaled visual classification for extant surveys by harnessing the combined power of thousands of volunteers. However, the much larger data sets anticipated from upcoming surveys will require a different approach. Automated classifiers using supervised machine learning have improved considerably over the past decade but their increasing sophistication comes at the expense of needing ever more training data. Crowdsourced classification by human volunteers is a critical technique for obtaining these training data. But several improvements can be made on this zeroth order solution. Efficiency gains can be achieved by implementing a “cascade filtering” approach whereby the task structure is reduced to a set of binary questions that are more suited to simpler machines while demanding lower cognitive loads for humans. Intelligent subject retirement based on quantitative metrics of volunteer skill and subject label reliability also leads to dramatic improvements in efficiency. We note that human and machine classifiers may retire subjects differently leading to trade-offs in performance space. Drawing on work with several Zooniverse projects including Galaxy Zoo and Supernova Hunter, we will present recent findings from experiments that combine cohorts of human and machine classifiers. We show that the most efficient system results when appropriate subsets of the data are intelligently assigned to each group according to their particular capabilities. With sufficient online training, simple machines can quickly classify “easy” subjects, leaving more difficult (and discovery-oriented) tasks for volunteers. We also find humans achieve higher classification purity while samples

  18. Adaptive Machine Vision

    Science.gov (United States)

    1989-01-25

    program MathCAD on an IBM PC/AT. As nearly as possible, the same notation is used in the discussion and in the MathCAD calculation for each ROC. Note on...include the case of S’ positive counts and (S-S’) negative counts. MathCAD does not allow sums over an index S satisfying a condition such as S > T... However, we can compute the same sum with MathCAD if we use a fixed range for S from 1 to m, and set to zero all terms that violate the condition S > T

  19. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash.

    Science.gov (United States)

    Pelletier, Mathew G

    2008-02-08

    One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPU) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed for the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems, is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned which in turn causes lint fiber-damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a tightly coupled trash sensor to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit "GPU", for processing of the cotton trash images, a speed up of over 6.5 times, over optimized code running on the PC's central processing unit "CPU", was gained. The new parallel algorithm operating on the

  20. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash

    Directory of Open Access Journals (Sweden)

    Mathew G. Pelletier

    2008-02-01

    Full Text Available One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPU) as an alternative to the PC’s traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed for the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems, is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned which in turn causes lint fiber-damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a tightly coupled trash sensor to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit “GPU”, for processing of the cotton trash images, a speed up of over 6.5 times, over optimized code running on the PC’s central processing

  1. Twitter Sentiment Analysis: Lexicon Method, Machine Learning Method and Their Combination

    OpenAIRE

    Kolchyna, Olga; Souza, Tharsis T. P.; Treleaven, Philip; Aste, Tomaso

    2015-01-01

    This paper covers the two approaches for sentiment analysis: i) lexicon based method; ii) machine learning method. We describe several techniques to implement these approaches and discuss how they can be adopted for sentiment classification of Twitter messages. We present a comparative study of different lexicon combinations and show that enhancing sentiment lexicons with emoticons, abbreviations and social-media slang expressions increases the accuracy of lexicon-based classification for Twi...
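
    One common way to combine the two approaches in this line of work is to append the lexicon score to the bag-of-words features of a machine-learning classifier; the sketch below illustrates that pattern with a tiny placeholder lexicon and toy tweets, not the paper's lexicon or data.

      # Sketch: lexicon score appended to bag-of-words features for an ML classifier.
      import numpy as np
      from scipy.sparse import hstack, csr_matrix
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression

      LEXICON = {"great": 1, "love": 1, "happy": 1, "bad": -1, "hate": -1, "awful": -1}

      def lexicon_score(text):
          return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

      tweets = ["love this phone", "awful battery, hate it", "great camera", "bad screen"]
      labels = [1, 0, 1, 0]

      vec = CountVectorizer()
      bow = vec.fit_transform(tweets)
      lex = csr_matrix(np.array([[lexicon_score(t)] for t in tweets], dtype=float))
      X = hstack([bow, lex])                      # word counts + lexicon score

      clf = LogisticRegression().fit(X, labels)
      print(clf.predict(X))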

  2. SAINT: A combined simulation language for modeling man-machine systems

    Science.gov (United States)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for design and analysis of complex man machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of the SAINT are discussed.

  3. A study of electrodischarge machining–pulse electrochemical machining combined machining for holes with high surface quality on superalloy

    National Research Council Canada - National Science Library

    Ma, Ning; Yang, Xiaolong; Gao, Mingqian; Song, Jinlong; Liu, Ganlin; Xu, Wenji

    2015-01-01

    .... A recast layer containing numerous micropores and microcracks is easily generated during the electrodischarge machining process due to the rapid heating and cooling effects, which restrict the wide...

  4. Real-time drogue recognition and 3D locating for UAV autonomous aerial refueling based on monocular machine vision

    OpenAIRE

    Wang Xufeng; Kong Xingwei; Zhi Jianhui; Chen Yong; Dong Xinmin

    2015-01-01

    Drogue recognition and 3D locating is a key problem during the docking phase of the autonomous aerial refueling (AAR). To solve this problem, a novel and effective method based on monocular vision is presented in this paper. Firstly, by employing computer vision with red-ring-shape feature, a drogue detection and recognition algorithm is proposed to guarantee safety and ensure the robustness to the drogue diversity and the changes in environmental conditions, without using a set of infrared l...

  5. A Novel Machine Learning Based Method of Combined Dynamic Environment Prediction

    Directory of Open Access Journals (Sweden)

    Wentao Mao

    2013-01-01

    Full Text Available In practical engineering, structures are often excited by different kinds of loads at the same time. How to effectively analyze and simulate this kind of structural dynamic environment, termed the combined dynamic environment, is one of the key issues. In this paper, a novel prediction method for the combined dynamic environment is proposed from the perspective of data analysis. First, the existence of dynamic similarity between vibration responses of the same structure under different boundary conditions is proven theoretically. It is further proven that this similarity can be captured by a multiple-input multiple-output regression model. Second, two machine learning algorithms, the multi-dimensional support vector machine and the extreme learning machine, are introduced to establish this model. To test the effectiveness of the method, shock and stochastic white-noise excitations are applied to a cylindrical shell with two clamps to simulate different dynamic environments. The prediction errors at various measuring points are all less than ±3 dB, which shows that the proposed method can predict, with good precision and numerical stability, the structural vibration response under one boundary condition from the response under another.
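
    A sketch of the MIMO regression idea: learn a mapping from vibration responses measured under one boundary condition to responses under another. Here SVR wrapped in scikit-learn's MultiOutputRegressor stands in for the paper's multi-dimensional support vector machine and extreme learning machine, and the data are synthetic.

      # Sketch: multiple-input multiple-output regression between two response sets.
      import numpy as np
      from sklearn.multioutput import MultiOutputRegressor
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 8))        # responses at 8 points, condition A
      Y = X @ rng.normal(size=(8, 5)) + 0.05 * rng.normal(size=(200, 5))  # condition B

      model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(X[:150], Y[:150])
      pred = model.predict(X[150:])
      print("mean abs error:", np.mean(np.abs(pred - Y[150:])))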

  6. Gradual Reduction in Sodium Content in Cooked Ham, with Corresponding Change in Sensorial Properties Measured by Sensory Evaluation and a Multimodal Machine Vision System.

    Directory of Open Access Journals (Sweden)

    Kirsti Greiff

    Full Text Available The European diet today generally contains too much sodium (Na+). Partial substitution of NaCl by KCl has been shown to be a promising method for reducing sodium content. The aim of this work was to investigate the sensorial changes of cooked ham with reduced sodium content. Traditional sensory evaluation and an objective multimodal machine vision system were used. The salt content in the hams was decreased from 3.4% to 1.4%, and 25% of the Na+ was replaced by K+. The salt reduction had the highest influence on the sensory attributes salty taste, after-taste, tenderness, hardness and color hue. The multimodal machine vision system showed changes in lightness as a function of reduced salt content. Compared to the reference ham (3.4% salt), replacing 25% of the Na+ ions by K+ ions gave no significant changes in WHC, moisture, pH, expressible moisture, the sensory profile attributes, or the surface lightness and shininess. A further reduction of salt down to 1.7-1.4% led to a decrease in WHC and an increase in expressible moisture.

  7. Combination of power electronic models with the two-dimensional finite element analysis of electrical machines

    Science.gov (United States)

    Vaeaenaenen, J.

    1994-04-01

    An analysis method for power electronic drives of electrical machines is presented. The machine is modeled by a two-dimensional finite element method which allows for magnetically nonlinear materials and the motion of the rotor. The power electronic device connected to the machine is modeled by a nonlinear circuit model. The field and circuit equations are coupled together as a single system of equations. The power electronic circuit can have a general topology given by a netlist-type input file. Specific attention is paid to the numerical stability and efficiency of the combined field-circuit formulation. The computational efficiency and the numerical reliability of the method are investigated with the aid of theoretical cases. According to the results, the inclusion of the nonlinear circuit model does not increase the computational cost significantly, provided that the sparsity of the system equations is preserved. The method is tested with three practical examples, and the results obtained are compared with measurements. The first example is a permanent magnet generator feeding a diode rectifier. In the second example, a filter circuit is added in parallel with the rectifier. The third example is a cage-induction motor fed by a static frequency converter. The computed results agree well with the measured ones.

  8. Social collective intelligence combining the powers of humans and machines to build a smarter society

    CERN Document Server

    Miorandi, Daniele; Rovatsos, Michael

    2014-01-01

    The book focuses on Social Collective Intelligence, a term used to denote a class of socio-technical systems that combine, in a coordinated way, the strengths of humans, machines and collectives in terms of competences, knowledge and problem-solving capabilities with the communication, computing and storage capabilities of advanced ICT. Social Collective Intelligence opens a number of challenges for researchers in both computer science and the social sciences; at the same time it provides an innovative approach to solving challenges in diverse application domains, ranging from health to education.

  9. Combination of Universal Mechanical Testing Machine with Atomic Force Microscope for Materials Research

    Science.gov (United States)

    Zhong, Jian; He, Dannong

    2015-08-01

    Surface deformation and fracture processes of materials under external force are important for understanding and developing materials. Here, a combined horizontal universal mechanical testing machine (HUMTM)-atomic force microscope (AFM) system is developed by modifying a UMTM to couple with an AFM and designing a height-adjustable stabilizing apparatus. The combined HUMTM-AFM system is then evaluated. Finally, as an initial demonstration, it is applied to analyze the relationship among macroscopic mechanical properties, surface nanomorphological changes under external force, and fracture processes for two representative large-scale thin-film materials: a polymer with a high strain rate (Parafilm) and a metal with a low strain rate (aluminum foil). The results demonstrate that the combined HUMTM-AFM system overcomes several disadvantages of current AFM-combined tensile/compression devices, including small load force, incompatibility with large-scale specimens, and unsuitability for materials with high strain rates. The combined HUMTM-AFM system is therefore a promising tool for materials research.

  10. A combination of HARMONIE short time direct normal irradiance forecasts and machine learning: The #hashtdim procedure

    Science.gov (United States)

    Gastón, Martín; Fernández-Peruchena, Carlos; Körnich, Heiner; Landelius, Tomas

    2017-06-01

    The present work describes the first version of a new procedure to forecast Direct Normal Irradiance (DNI), #hashtdim, which combines ground measurements and numerical weather predictions. The system focuses on generating predictions for very short time horizons. It combines the output of the Numerical Weather Prediction model HARMONIE with an adaptive methodology based on machine learning. The DNI predictions are generated at 15-minute and hourly temporal resolutions and are updated every 3 hours. Each update offers forecasts for the next 12 hours; the first nine hours are generated at 15-minute resolution, while the last three hours are at hourly resolution. The system is tested at a Spanish site with an operational BSRN station in the south of Spain (the PSA station). The #hashtdim procedure has been implemented in the framework of the Direct Normal Irradiance Nowcasting methods for optimized operation of concentrating solar technologies (DNICast) project, under the European Union's Seventh Framework Programme for research, technological development and demonstration.

  11. Combining generative and discriminative representation learning for lung CT analysis with convolutional restricted Boltzmann machines

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2016-01-01

    …unlabeled data, but does not necessarily produce features that are optimal for classification. In this paper we propose the convolutional classification restricted Boltzmann machine, which combines a generative and a discriminative learning objective. This allows it to learn filters that are good both for describing the training data and for classification. We present experiments with feature learning for lung texture classification and airway detection in CT images. In both applications, a combination of learning objectives outperformed purely discriminative or generative learning, increasing, for instance, the lung tissue classification accuracy by 1 to 8 percentage points. This shows that discriminative learning can help an otherwise unsupervised feature learner to learn filters that are optimized for classification.

  12. Identification of Type 2 Diabetes-associated combination of SNPs using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Park Keun-Joon

    2010-04-01

    Full Text Available Abstract Background Type 2 diabetes mellitus (T2D), a metabolic disorder characterized by insulin resistance and relative insulin deficiency, is a complex disease of major public health importance. Its incidence is rapidly increasing in developed countries. Complex diseases are caused by interactions between multiple genes and environmental factors. Most association studies aim to identify individual susceptibility markers using a simple disease model. Recent studies are trying to estimate the effects of multiple genes and multiple loci in genome-wide association, but estimating the effects of such associations is very difficult. We aim to assess rules for classifying diseased and normal subjects by evaluating potential gene-gene interactions in the same or distinct biological pathways. Results We analyzed the importance of gene-gene interactions in T2D susceptibility by investigating 408 single nucleotide polymorphisms (SNPs) in 87 genes involved in major T2D-related pathways in 462 T2D patients and 456 healthy controls from Korean cohort studies. We evaluated the support vector machine (SVM) method for differentiating between cases and controls using SNP information in a 10-fold cross-validation test. We achieved a 65.3% prediction rate with a combination of 14 SNPs in 12 genes using the radial basis function (RBF)-kernel SVM. Similarly, we investigated subpopulation data sets of men and women and identified different SNP combinations with prediction rates of 70.9% and 70.6%, respectively. As high-throughput technology for genome-wide SNPs improves, it is likely that a much higher prediction rate with a biologically more interesting combination of SNPs can be obtained by using this method. Conclusions The support vector machine-based feature selection method used in this research found novel associations between combinations of SNPs and T2D in a Korean population.
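    A minimal sketch of the classification setup described above (an RBF-kernel SVM assessed by 10-fold cross-validation) is given below; the genotype matrix is randomly generated and only the cohort proportions are taken from the abstract, so it is illustrative rather than a reproduction of the study.

```python
# Sketch of the SNP-combination classifier: an RBF-kernel SVM evaluated with
# 10-fold cross-validation. Genotypes are encoded 0/1/2 (minor-allele count);
# the data below are random placeholders, not the Korean cohort.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_snps = 918, 14            # 462 cases + 456 controls, 14 selected SNPs
X = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)
y = np.array([1] * 462 + [0] * 456)     # 1 = T2D case, 0 = control

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```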

  13. Combined Vision and Wearable Sensors-based System for Movement Analysis in Rehabilitation.

    Science.gov (United States)

    Spasojević, Sofija; Ilić, Tihomir V; Milanović, Slađan; Potkonjak, Veljko; Rodić, Aleksandar; Santos-Victor, José

    2017-03-23

    Traditional rehabilitation sessions are often a slow, tedious, disempowering and non-motivational process, supported by clinical assessment tools, i.e. evaluation scales that are prone to subjective rating and imprecise interpretation of the patient's performance. Poor patient motivation and insufficient accuracy are thus critical factors that can be improved by new sensing and processing technologies. We aim to develop a portable and affordable system, suitable for home rehabilitation, which combines vision-based and wearable sensors. We introduce a novel approach for examining and characterizing rehabilitation movements using quantitative descriptors. We propose new Movement Performance Indicators (MPIs) that are extracted directly from sensor data and quantify the symmetry, velocity, and acceleration of the movement of different body and hand parts, and that can potentially be used by therapists for diagnosis and progress assessment. First, a set of rehabilitation exercises is defined under the supervision of neurologists and therapists for the specific case of Parkinson's disease. It comprises full-body movements measured with a Kinect device and fine hand movements acquired with a data glove. Then, the sensor data are used to compute 25 Movement Performance Indicators to assist diagnosis and progress monitoring (assessing the disease stage) in Parkinson's disease. A kinematic hand model is developed for data verification and as an additional resource for extracting supplementary movement information. Our results show that the proposed Movement Performance Indicators are relevant for Parkinson's disease assessment. This is further confirmed by the correlation of the proposed indicators with the clinical tapping test and the UPDRS clinical scale. Classification results showed the potential of these indicators to discriminate between patients and controls, as well as between the stages that characterize the evolution of the disease. The proposed sensor system…

  14. Automated systems based on machine vision for inspecting citrus fruits from the field to postharvest - A review

    OpenAIRE

    CUBERO GARCÍA, SERGIO; Lee, Won Suk; Aleixos Borrás, María Nuria; Albert Gil, Francisco Eugenio; BLASCO IVARS, JOSE

    2016-01-01

    Computer vision systems are becoming a scientific but also a commercial tool for food quality assessment. In the field, these systems can be used to predict yield, as well as for robotic harvesting or the early detection of potentially dangerous diseases. In postharvest handling, it is mostly used for the automated inspection of the external quality of the fruits and for sorting them into commercial categories at very high speed. More recently, the use of hyperspectral imaging is allowing not...

  15. MEDEA: Automated Measure and on-line Analysis in Astronomy and Astrophysics for Very Large Vision Machine

    OpenAIRE

    Iovane, G.

    2002-01-01

    MEDEA is a software architecture for detecting the luminosity variations connected with the discovery of new planets outside the Solar System. Given the enormous number of stars to be monitored for this aim, traditional approaches are very demanding in terms of computing time; here, the implementation of an automatic vision and decision system is presented, which performs on-line discrimination of possible events by using two levels of trigger and quasi-on-line data analysis. MED...

  16. Real-time drogue recognition and 3D locating for UAV autonomous aerial refueling based on monocular machine vision

    Directory of Open Access Journals (Sweden)

    Wang Xufeng

    2015-12-01

    Full Text Available Drogue recognition and 3D locating is a key problem during the docking phase of the autonomous aerial refueling (AAR. To solve this problem, a novel and effective method based on monocular vision is presented in this paper. Firstly, by employing computer vision with red-ring-shape feature, a drogue detection and recognition algorithm is proposed to guarantee safety and ensure the robustness to the drogue diversity and the changes in environmental conditions, without using a set of infrared light emitting diodes (LEDs on the parachute part of the drogue. Secondly, considering camera lens distortion, a monocular vision measurement algorithm for drogue 3D locating is designed to ensure the accuracy and real-time performance of the system, with the drogue attitude provided. Finally, experiments are conducted to demonstrate the effectiveness of the proposed method. Experimental results show the performances of the entire system in contrast with other methods, which validates that the proposed method can recognize and locate the drogue three dimensionally, rapidly and precisely.

  17. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  18. Combining Benford's Law and machine learning to detect money laundering. An actual Spanish court case.

    Science.gov (United States)

    Badal-Valero, Elena; Alvarez-Jareño, José A; Pavía, Jose M

    2018-01-01

    This paper is based on the analysis of the database of operations from a macro-case on money laundering orchestrated between a core company and a group of its suppliers, 26 of which had already been identified by the police as fraudulent companies. In the face of a well-founded suspicion that more companies have perpetrated criminal acts and in order to make better use of what are very limited police resources, we aim to construct a tool to detect money laundering criminals. We combine Benford's Law and machine learning algorithms (logistic regression, decision trees, neural networks, and random forests) to find patterns of money laundering criminals in the context of a real Spanish court case. After mapping each supplier's set of accounting data into a 21-dimensional space using Benford's Law and applying machine learning algorithms, additional companies that could merit further scrutiny are flagged up. A new tool to detect money laundering criminals is proposed in this paper. The tool is tested in the context of a real case. Copyright © 2017 Elsevier B.V. All rights reserved.
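    The sketch below illustrates the general idea of mapping each supplier's transaction amounts to digit-frequency features in the spirit of Benford's Law and training a classifier on police-labelled examples; the amounts, labels and the logistic-regression choice are placeholders, and the paper's actual 21-dimensional feature space and algorithm suite are richer.

```python
# Sketch of the Benford's-Law-plus-ML idea: describe each supplier by the
# observed first-digit distribution of its transaction amounts, then train a
# classifier on suppliers already labelled as fraudulent. All data below are
# random placeholders, and only first-digit frequencies are used as features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def first_digit_profile(amounts):
    """Relative frequency of leading digits 1..9 for a vector of amounts."""
    digits = np.array([int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a != 0])
    return np.array([(digits == d).mean() for d in range(1, 10)])

rng = np.random.default_rng(2)
suppliers = [rng.lognormal(mean=5, sigma=1, size=500) for _ in range(60)]
X = np.vstack([first_digit_profile(s) for s in suppliers])
y = rng.integers(0, 2, size=60)          # 1 = flagged as fraudulent (placeholder labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("suspicion scores:", clf.predict_proba(X)[:5, 1])   # probability of being fraudulent
```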

  19. Synergistic target combination prediction from curated signaling networks: Machine learning meets systems biology and pharmacology.

    Science.gov (United States)

    Chua, Huey Eng; Bhowmick, Sourav S; Tucker-Kellogg, Lisa

    2017-10-01

    Given a signaling network, the target combination prediction problem aims to predict efficacious and safe target combinations for combination therapy. State-of-the-art in silico methods use Monte Carlo simulated annealing (MCSA) to modify a candidate solution stochastically, and use the Metropolis criterion to accept or reject the proposed modifications. However, such stochastic modifications ignore the impact of the choice of targets and their activities on the combination's therapeutic effect and off-target effects, which directly affect the solution quality. In this paper, we present MASCOT, a method that addresses this limitation by leveraging two additional heuristic criteria to minimize off-target effects and achieve synergy during candidate modification. Specifically, off-target effects measure the unintended response of a signaling network to the target combination and are often associated with toxicity. Synergy occurs when a pair of targets exerts effects that are greater than the sum of their individual effects, and is generally a beneficial strategy for maximizing effect while minimizing toxicity. MASCOT leverages a machine learning-based target prioritization method, which prioritizes potential targets in a given disease-associated network to select more effective targets (better therapeutic effect and/or lower off-target effects), and Loewe additivity theory from pharmacology, which assesses the non-additive effects in a combination drug treatment to select synergistic target activities. Our experimental study on two disease-related signaling networks demonstrates the superiority of MASCOT in comparison to existing approaches. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Combining PSSM and physicochemical feature for protein structure prediction with support vector machine

    Science.gov (United States)

    Kurniawan, I.; Haryanto, T.; Hasibuan, L. S.; Agmalaro, M. A.

    2017-05-01

    Protein is one of the giant biomolecules that act as the main components of organisms. Proteins are formed from building blocks called amino acids. Hierarchically, protein structure is divided into four levels: primary, secondary, tertiary, and quaternary. The secondary structure is formed by the amino acid sequence, folds into the three-dimensional structure, and carries information about the tertiary structure and function of the protein. This study used 277,389 protein residues from the enzyme category. Position-specific scoring matrix (PSSM) profiles and physicochemical properties are used as features. The study developed support vector machine models to predict protein secondary structure by recognizing patterns in amino acid sequences. The best Q3 score obtained was 93.16%, on a dataset with 260 features and a radial kernel. Combining PSSM and physicochemical features is thus effective for prediction.
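    A minimal sketch of the feature setup follows, assuming a 13-residue sliding window over a 20-column PSSM plus a handful of physicochemical scales; the feature values and labels are random stand-ins, so the reported Q3 cannot be reproduced from it.

```python
# Sketch of secondary-structure prediction from PSSM plus physicochemical
# features with an RBF-kernel SVM. The feature values, physicochemical scales
# and labels are random placeholders, not the 277,389-residue dataset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_residues, window, n_physchem = 2000, 13, 7

pssm_feats = rng.normal(size=(n_residues, window * 20))   # 20 PSSM columns per window position
phys_feats = rng.normal(size=(n_residues, n_physchem))    # e.g. hydrophobicity, polarity, ...
X = np.hstack([pssm_feats, phys_feats])
y = rng.choice(["H", "E", "C"], size=n_residues)           # helix / strand / coil labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print(f"Q3 on held-out residues: {clf.score(X_te, y_te):.3f}")
```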

  1. Analysis of EEG signals by combining eigenvector methods and multiclass support vector machines.

    Science.gov (United States)

    Derya Ubeyli, Elif

    2008-01-01

    A new approach based on the implementation of a multiclass support vector machine (SVM) with error-correcting output codes (ECOC) is presented for the classification of electroencephalogram (EEG) signals. In practical pattern recognition applications, there are often diverse features extracted from the raw data that need to be recognized. Decision making was performed in two stages: feature extraction by eigenvector methods, and classification using classifiers trained on the extracted features. The aim of the study is the classification of EEG signals by the combination of eigenvector methods and a multiclass SVM, in order to determine an optimum classification scheme for this problem and to infer clues about the extracted features. The present research demonstrated that the eigenvector methods produce features that represent the EEG signals well, and that the multiclass SVM trained on these features achieved high classification accuracies.
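    As an illustration of the two-stage scheme, the sketch below classifies placeholder EEG feature vectors with a multiclass SVM wrapped in error-correcting output codes; the feature dimensionality, class count and code size are assumptions, not the study's settings.

```python
# Sketch of the two-stage scheme: features extracted from EEG segments (random
# placeholders here, standing in for eigenvector-method spectral features) are
# classified with a multiclass SVM using error-correcting output codes (ECOC).
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 30))        # 300 EEG segments, 30 spectral features each
y = rng.integers(0, 3, size=300)      # three EEG classes (placeholder labels)

ecoc_svm = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                code_size=2.0, random_state=0)
print("5-fold CV accuracy:", cross_val_score(ecoc_svm, X, y, cv=5).mean())
```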

  2. A new vision through combined osteo-odonto-keratoplasty: A review

    Directory of Open Access Journals (Sweden)

    Lakshmi Shetty

    2014-01-01

    Full Text Available This is an extensive review of osteo-odonto-keratoplasty (OOKP), in which vision is restored by using a tooth as an implant in the environment of the eye. The window of the soul is the eye, and the window of the eye is the cornea. This review article discusses the remarkable operation to regain the sight of patients with corneal blindness, a procedure in which a multidisciplinary approach involving both the oral and maxillofacial surgeon and the ophthalmologist contributes to restoring vision in the most severe cases of corneal blindness. It involves removing a canine tooth from the patient, shaping and drilling it to allow implantation of an artificial plastic corneal device, and finally implanting it back into the eye a few months later. The OOKP is the keratoprosthesis of choice for end-stage corneal blindness not amenable to penetrating keratoplasty. This transplantation procedure uses an autologous dental root-bone lamina complex and a buccal mucosal graft to secure the optical cylinder, which acts as a ray of vision for corneal blindness. The review comprises the indications, contraindications, patient assessment, surgical procedure, complications, surgical interprofessionalism and future scope of OOKP. The sources of data for the review were PubMed, Medline and the published research studies and reports on osteo-odonto-keratoplasty. In this complex procedure, good results can be obtained with modern technology and expertise.

  3. Combined effect of vision and hearing impairment on depression in elderly Chinese.

    Science.gov (United States)

    Chou, Kee-Lee; Chi, Iris

    2004-09-01

    Sensory impairment and depression are common in old age, and the relations between depression and both vision and hearing impairment have been established. However, few studies have directly compared their effects and examined the impact of dual sensory loss. The purpose of this study is to compare the impacts of self-reported hearing and vision loss, as well as the effect of dual sensory impairment, on depression. This article analyzes cross-sectional data collected from a representative community sample of 2,003 Chinese elderly people aged 60 or above in Hong Kong. Respondents were interviewed face-to-face, and data on vision and hearing impairment, socio-demographic variables, health indicators, family support, and depression were obtained. Logistic regression analyses revealed that visual impairment was significantly related to depression even after age, gender, marital status, education, self-reported health status, the presence of 11 diseases, functional limitation and family support were controlled for, but hearing loss was not. Hearing impairment did not add to the likelihood of depression where visual impairment was already present. The impact of visual impairment on psychological well-being among elderly Chinese is more robust than that of hearing loss. Therefore, aged-care service practitioners must take this risk factor into consideration in their preventive interventions and treatments for psychological well-being.

  4. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous researches and other fusion methods.
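    A rough sketch of the score-level fusion step follows, under the assumption that each image pair yields one shape-based and one appearance-based matching score; the synthetic scores and the rule used to generate the labels are invented for illustration.

```python
# Sketch of score-level fusion: an SVM is trained on pairs of matching scores
# (one from a shape-based matcher, one from an appearance-based matcher) to
# decide whether two face images show the same expression. The scores and the
# labelling rule below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_pairs = 400
shape_score = rng.uniform(size=n_pairs)        # similarity from facial-feature-point ratios
appearance_score = rng.uniform(size=n_pairs)   # similarity from an appearance/texture model
X = np.column_stack([shape_score, appearance_score])
y = (0.6 * shape_score + 0.4 * appearance_score
     + 0.1 * rng.normal(size=n_pairs) > 0.5).astype(int)   # 1 = same expression (placeholder)

fusion_svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(fusion_svm.predict([[0.9, 0.8], [0.2, 0.1]]))        # high vs. low score pairs
```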

  5. Combining machine learning, crowdsourcing and expert knowledge to detect chemical-induced diseases in text.

    Science.gov (United States)

    Bravo, Àlex; Li, Tong Shu; Su, Andrew I; Good, Benjamin M; Furlong, Laura I

    2016-01-01

    Drug toxicity is a major concern for both regulatory agencies and the pharmaceutical industry. In this context, text-mining methods for the identification of drug side effects from free text are key for the development of up-to-date knowledge sources on adverse drug reactions. We present a new system for the identification of drug side effects from the literature that combines three approaches: machine learning, rule-based and knowledge-based approaches. The system has been developed to address Task 3.B of the BioCreative V challenge (BC5) dealing with Chemical-induced Disease (CID) relations. The first two approaches focus on identifying relations at the sentence level, while the knowledge-based approach is applied at both the sentence and abstract levels. The machine learning method is based on the BeFree system using two corpora as training data: the annotated data provided by the CID task organizers and a new CID corpus developed by crowdsourcing. Different combinations of results from the three strategies were selected for each run of the challenge. In the final evaluation setting, the system achieved the highest Recall of the challenge (63%). By performing an error analysis, we identified the main causes of misclassification and areas for improvement of our system, and highlighted the need for consistent gold standard data sets for advancing the state of the art in text mining of drug side effects. Database URL: https://zenodo.org/record/29887?ln=en#.VsL3yDLWR_V. © The Author(s) 2016. Published by Oxford University Press.

  6. Simulation of the «COSMONAUT-ROBOT» System Interaction on the Lunar Surface Based on Methods of Machine Vision and Computer Graphics

    Science.gov (United States)

    Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.

    2017-05-01

    Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors in safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and the cosmonaut's poses. When travelling in the "Follow Me" (master/slave) mode, a robot uses onboard tools to track the cosmonaut's position and movements, and on the basis of these data builds its itinerary. Interaction in the "cosmonaut-robot" system on the lunar surface differs significantly from that on the Earth's surface. For example, a person dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for cosmonauts, and a tired person performs movements less accurately and makes mistakes more often. All this leads to new requirements for the convenient use of a man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication it is necessary to provide options for duplicating commands at each task stage and for gesture recognition. New tools and techniques for space missions must first be examined under laboratory conditions and then in field tests (proof tests at the site of application). The article analyzes methods for detection and tracking of movements and for gesture recognition of the cosmonaut during EVA, which can be used in the design of the human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. The simulation involves environment visualization and modeling of the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.

  7. Functional components produced by multi-jet modelling combined with electroforming and machining

    Directory of Open Access Journals (Sweden)

    Baier, Oliver

    2014-08-01

    Full Text Available In fuel cell technology, certain components are responsible for guiding liquid media. When these components are produced by conventional manufacturing, there are often sealing issues, and trouble- and maintenance-free deployment cannot be ensured. Against this background, a new process combination has been developed in a joint project between the University of Duisburg-Essen, the Center for Fuel Cell Technology (ZBT), and the company Galvano-T electroplating forming GmbH. The approach is to combine multi-jet modelling (MJM), electroforming and milling in order to produce a defined external geometry. The wax models are generated on copper base plates and copper-coated to the desired thickness. Following this, the undefined electroplated surfaces are machined to the desired dimensions, and the wax is melted out. This paper first shows how this process is technically feasible, and then describes how the MJM process on a 3-D Systems ThermoJet was adapted to stabilise the process. In the AiF-sponsored ZIM project, existing limits and possibilities are shown and different electroplating approaches are investigated. The paper explores whether activation of the wax structure by a conductive initial layer is required. Using the described process chain, different parts were built: a heat exchanger, a vaporiser, and a reformer (in which pellets were integrated in an intermediate step). In addition, multi-layer parts with different functions were built by repeating the process combination several times.

  8. Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques

    Science.gov (United States)

    Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili

    2009-01-01

    In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…

  9. Teaching a Machine to Feel Postoperative Pain: Combining High-Dimensional Clinical Data with Machine Learning Algorithms to Forecast Acute Postoperative Pain.

    Science.gov (United States)

    Tighe, Patrick J; Harle, Christopher A; Hurley, Robert W; Aytug, Haldun; Boezaart, Andre P; Fillingim, Roger B

    2015-07-01

    Given their ability to process high-dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8,071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor (k-NN), with logistic regression included for baseline comparison. In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy with an area under the receiver-operating characteristic curve (AUC) of 0.704. Next, the gradient-boosted decision tree had an AUC of 0.665 and the k-NN algorithm had an AUC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an AUC of 0.727. Logistic regression had a lower AUC of 0.5 for predicting pain outcomes on PODs 1 and 3. Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. Wiley Periodicals, Inc.
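    The sketch below mirrors the comparison described above, an L1-penalised (LASSO-style) logistic regression versus a boosted tree ensemble evaluated by AUC, on simulated data with 796 variables; the hyperparameters and the outcome-generating rule are assumptions, not those of the study.

```python
# Sketch of the model comparison: an L1-penalised ("LASSO") logistic regression
# and a gradient-boosted tree ensemble compared by area under the ROC curve.
# The 796 clinical variables and the pain outcomes are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 796))
y = (X[:, :5].sum(axis=1) + rng.normal(size=2000) > 0).astype(int)   # moderate/severe pain

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
gbt = GradientBoostingClassifier().fit(X_tr, y_tr)

for name, model in [("LASSO", lasso), ("boosted trees", gbt)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```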

  10. Prognostic Value of Combined Clinical and Myocardial Perfusion Imaging Data Using Machine Learning.

    Science.gov (United States)

    Betancur, Julian; Otaki, Yuka; Motwani, Manish; Fish, Mathews B; Lemley, Mark; Dey, Damini; Gransar, Heidi; Tamarappoo, Balaji; Germano, Guido; Sharir, Tali; Berman, Daniel S; Slomka, Piotr J

    2017-10-16

    This study evaluated the added predictive value of combining clinical information and myocardial perfusion single-photon emission computed tomography (SPECT) imaging (MPI) data using machine learning (ML) to predict major adverse cardiac events (MACE). Traditionally, prognostication by MPI has relied on visual or quantitative analysis of images without objective consideration of the clinical data. ML permits a large number of variables to be considered in combination and at a level of complexity beyond the human clinical reader. A total of 2,619 consecutive patients (48% men; 62 ± 13 years of age) who underwent exercise (38%) or pharmacological stress (62%) with high-speed SPECT MPI were monitored for MACE. Twenty-eight clinical variables, 17 stress test variables, and 25 imaging variables (including total perfusion deficit [TPD]) were recorded. Areas under the receiver-operating characteristic curve (AUC) for MACE prediction were compared among: 1) ML with all available data (ML-combined); 2) ML with only imaging data (ML-imaging); 3) 5-point scale visual diagnosis (physician [MD] diagnosis); and 4) automated quantitative imaging analysis (stress TPD and ischemic TPD). ML involved automated variable selection by information gain ranking, model building with a boosted ensemble algorithm, and 10-fold stratified cross validation. During follow-up (3.2 ± 0.6 years), 239 patients (9.1%) had MACE. MACE prediction was significantly higher for ML-combined than for ML-imaging (AUC: 0.81 vs. 0.78). ML combining clinical and imaging data variables was found to have high predictive accuracy for 3-year risk of MACE and was superior to existing visual or automated perfusion assessments. ML could allow integration of clinical and imaging data for personalized MACE risk computations in patients undergoing SPECT MPI. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
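    A minimal sketch of the stated ML pipeline follows: information-gain-style variable ranking, a boosted ensemble, and 10-fold stratified cross-validation. The variable counts follow the abstract, but the data and labels are random placeholders, so the resulting AUC is meaningless.

```python
# Sketch of the described pipeline: rank variables by an information-gain-style
# criterion (mutual information), keep the top-ranked ones, and evaluate a
# boosted ensemble with 10-fold stratified cross-validation. All clinical,
# stress-test and imaging variables here are simulated placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(2619, 70))                     # 28 clinical + 17 stress + 25 imaging variables
y = (rng.uniform(size=2619) < 0.091).astype(int)    # ~9.1% MACE rate, random placeholder labels

pipeline = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),         # automated variable selection
    GradientBoostingClassifier(),                   # boosted ensemble
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(pipeline, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f}")
```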

  11. Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response.

    Science.gov (United States)

    Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane

    2016-03-01

    results suggest that the platform we have developed to combine crowdsourcing and machine learning to make sense of large volumes of aerial images can be used for disaster response.

  12. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Science.gov (United States)

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P

    2014-01-01

    Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open source software. Source code for the data analysis is written in R. The equations used to calculate the image descriptors are also provided.

  13. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    Directory of Open Access Journals (Sweden)

    Anyela Camargo

    Full Text Available Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should therefore be possible to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features, but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment, and the computer routines for image processing and data analysis have been implemented using open source software. Source code for the data analysis is written in R. The equations used to calculate the image descriptors are also provided.
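    The original analysis was implemented in R; the sketch below gives an illustrative Python equivalent of the segmentation, shape-feature extraction and principal component steps, applied to synthetic images rather than the Arabidopsis rosettes.

```python
# Illustrative Python equivalent of a segmentation -> shape features -> PCA
# pipeline, run on synthetic grey-level images instead of rosette photographs.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.decomposition import PCA

def shape_features(image):
    """Segment the largest object by Otsu thresholding and return simple shape descriptors."""
    mask = image > threshold_otsu(image)
    props = max(regionprops(label(mask)), key=lambda p: p.area)
    return [props.area, props.perimeter, props.eccentricity,
            props.solidity, props.extent]

rng = np.random.default_rng(8)
images = rng.random(size=(40, 64, 64))          # placeholder grey-level images
X = np.array([shape_features(im) for im in images])

pca = PCA(n_components=5).fit(X)
print("variance explained by 5 components:", pca.explained_variance_ratio_.sum())
```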

  14. Modeling and Forecast Biological Oxygen Demand (BOD using Combination Support Vector Machine with Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Abazar Solgi

    2017-06-01

    …given from the Fourier transform, which was introduced in the nineteenth century. The concept of the wavelet transform in its current form was presented by Morlet and a team under the supervision of Alex Grossman at the Centre for Theoretical Physics in Marseille, France. After decomposing the parameters using wavelet analysis and applying principal component analysis (PCA), the main components were determined. These components were then used as inputs to the support vector machine model to obtain a hybrid Wavelet-SVM (WSVM) model. For this study, a monthly series of BOD in the Karun River at the Molasani station and the auxiliary variables dissolved oxygen (DO), temperature and monthly river flow over a 13-year period (2002-2014) were used. Results and Discussion: To run the SVM model, seven different input combinations were evaluated. Combination 6, which contained the four parameters BOD, dissolved oxygen (DO), temperature and monthly river flow with a time lag, had the best performance. The best structure had an RMSE of 0.0338 and a coefficient of determination of 0.84. To obtain the WSVM results, the input parameters were decomposed into sub-signals with the wavelet transform; these sub-signals were then analysed with the principal component analysis (PCA) method, and the important components were entered as inputs to the SVM model to obtain the hybrid WSVM model. After numerous runs in different configurations and comparison of the outcomes, the results were obtained. A key point in the choice of the mother wavelet is the shape of the time series: mother wavelet functions whose shape better matches the curve of the time series perform the mapping better and therefore give better results. In this study, following various wavelet tests and the above consideration, four mother wavelet functions, Haar, Db2, Db7 and Sym3, were selected. Conclusions: Comparison of the monthly modeling results indicates that the use of wavelet transforms can…
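    A compact sketch of the hybrid WSVM idea follows, assuming a stationary wavelet transform of each input series, PCA on the stacked sub-signals and a support vector regressor; the monthly series, wavelet choice and component count are placeholders rather than the study's configuration.

```python
# Sketch of the hybrid Wavelet-SVM (WSVM) idea: decompose each input series
# with a stationary wavelet transform, reduce the stacked sub-signals with PCA
# and feed the components to a support vector regressor predicting monthly BOD.
# All series below are synthetic placeholders, not the Karun River data.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(9)
n_months = 160                                 # length must be divisible by 2**level for swt
do = rng.normal(8, 1, n_months)                # dissolved oxygen (placeholder)
temp = rng.normal(25, 5, n_months)             # temperature (placeholder)
flow = rng.normal(300, 50, n_months)           # river discharge (placeholder)
bod = 0.5 * do + 0.1 * temp + 0.01 * flow + rng.normal(0, 0.3, n_months)

def wavelet_subsignals(series, wavelet="db2", level=2):
    """Stack approximation/detail sub-signals from a stationary wavelet transform."""
    coeffs = pywt.swt(series, wavelet, level=level)
    return np.column_stack([c for pair in coeffs for c in pair])

X = np.hstack([wavelet_subsignals(s) for s in (do, temp, flow)])
X = PCA(n_components=6).fit_transform(X)       # keep the main components

model = SVR(kernel="rbf", C=10.0).fit(X[:-12], bod[:-12])
print("last-year predictions:", model.predict(X[-12:]).round(2))
```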

  15. Teamwork: improved eQTL mapping using combinations of machine learning methods.

    Directory of Open Access Journals (Sweden)

    Marit Ackermann

    Full Text Available Expression quantitative trait loci (eQTL) mapping is a widely used technique to uncover regulatory relationships between genes. A range of methodologies have been developed to map links between expression traits and genotypes. The DREAM (Dialogue on Reverse Engineering Assessments and Methods) initiative is a community project to objectively assess the relative performance of different computational approaches for solving specific systems biology problems. The goal of one of the DREAM5 challenges was to reverse-engineer genetic interaction networks from synthetic genetic variation and gene expression data, which simulates the problem of eQTL mapping. In this framework, we proposed an approach whose originality resides in the use of a combination of existing machine learning algorithms (a committee). Although it was not the best performer, this method was by far the most precise on average. After the competition, we continued in this direction by evaluating other committees using the DREAM5 data and developed a method that relies on Random Forests and LASSO. It achieved a much higher average precision than the DREAM best performer at the cost of slightly lower average sensitivity.
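    As a toy version of the committee idea, the sketch below lets a Random Forest and a LASSO regression each nominate markers for one simulated expression trait and keeps only the markers both agree on; the data, the two-member committee and the agreement rule are illustrative assumptions, not the authors' procedure.

```python
# Toy two-member "committee" for one expression trait: a Random Forest and a
# LASSO regression each score every genetic marker, and a marker is only
# proposed when both methods agree. Genotypes and expression are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(10)
n_samples, n_markers = 300, 100
genotypes = rng.integers(0, 3, size=(n_samples, n_markers)).astype(float)
expression = 1.5 * genotypes[:, 3] - 1.0 * genotypes[:, 42] + rng.normal(size=n_samples)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(genotypes, expression)
lasso = LassoCV(cv=5).fit(genotypes, expression)

rf_hits = set(np.argsort(rf.feature_importances_)[-5:])   # top-5 markers by importance
lasso_hits = set(np.flatnonzero(lasso.coef_))              # markers with non-zero coefficients
print("markers proposed by the committee:", sorted(rf_hits & lasso_hits))
```

    Requiring agreement between the two learners trades sensitivity for precision, which matches the behaviour reported for the committee above.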

  16. Combining Machine Learning and Natural Language Processing to Assess Literary Text Comprehension

    Science.gov (United States)

    Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S.

    2017-01-01

    This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…

  17. Demonstration of a Semi-Autonomous Hybrid Brain-Machine Interface using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic

    Science.gov (United States)

    McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.

    2014-01-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914

  18. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    Science.gov (United States)

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.

  19. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2017-06-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular (“wet”) age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  20. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2016-07-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular (“wet”) age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  1. Combined pigmentary and structural effects tune wing scale coloration to color vision in the swallowtail butterfly Papilio xuthus.

    Science.gov (United States)

    Stavenga, Doekele G; Matsushita, Atsuko; Arikawa, Kentaro

    2015-01-01

    Butterflies have well-developed color vision, presumably optimally tuned to the detection of conspecifics by their wing coloration. Here we investigated the pigmentary and structural basis of the wing colors in the Japanese yellow swallowtail butterfly, Papilio xuthus, applying spectrophotometry, scatterometry, light and electron microscopy, and optical modeling. The approximately flat lower lamina of the wing scales plays a crucial role in wing coloration. In the cream, orange and black scales, the lower lamina is a thin film with a thickness characteristic of the scale type. The thin film acts as an interference reflector, causing a structural color that is spectrally filtered by the scale's pigment. In the cream and orange scales, papiliochrome pigment is concentrated in the ridges and crossribs of the elaborate upper lamina. In the black scales the upper lamina contains melanin. The blue scales are unpigmented and their structure differs strongly from that of the pigmented scales. The distinct blue color is created by the combination of an optical multilayer in the lower lamina and a fine-structured upper lamina. The structural and pigmentary scale properties are spectrally closely related, suggesting that they are under genetic control of the same key enzymes. The wing reflectance spectra resulting from the tapestry of scales are well discriminable by the Papilio color vision system.
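    For the optical-modeling component, a single-layer thin-film reflector at normal incidence can be sketched with the standard Airy formula, as below; the refractive index and thickness values are generic chitin-like guesses, not the fitted parameters of the paper.

```python
# Sketch of thin-film interference for a single homogeneous layer at normal
# incidence (Airy formula). Refractive index and thicknesses are illustrative
# chitin-like values, not the fitted parameters of the lower lamina.
import numpy as np

def thin_film_reflectance(wavelength_nm, thickness_nm, n_film=1.56, n_in=1.0, n_out=1.0):
    """Reflectance of a single thin film between two media, normal incidence."""
    r12 = (n_in - n_film) / (n_in + n_film)
    r23 = (n_film - n_out) / (n_film + n_out)
    phase = 2 * np.pi * n_film * thickness_nm / wavelength_nm      # one-way phase thickness
    r = (r12 + r23 * np.exp(2j * phase)) / (1 + r12 * r23 * np.exp(2j * phase))
    return np.abs(r) ** 2

wavelengths = np.arange(300, 701, 10)
for d in (120, 200, 280):                                          # film thicknesses in nm
    R = thin_film_reflectance(wavelengths, d)
    peak = wavelengths[np.argmax(R)]
    print(f"d = {d} nm: reflectance peaks near {peak} nm")
```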

  2. On-line measurement of ski-jumper trajectory: combining stereo vision and shape description

    Science.gov (United States)

    Nunner, T.; Sidla, O.; Paar, G.; Nauschnegg, B.

    2010-01-01

    Ski jumping has continuously raised major public interest since the early 70s of the last century, mainly in Europe and Japan. The sport undergoes high-level analysis and development based, among other things, on biodynamic measurements during the take-off and flight phase of the jumper. We report on a vision-based solution for such measurements that provides a full 3D trajectory of unique points on the jumper's shape. During the jump, synchronized stereo images are taken by a calibrated camera system at video rate. Using methods stemming from video surveillance, the jumper is detected and localized in the individual stereo images, and learning-based deformable shape analysis identifies the jumper's silhouette. The 3D reconstruction of the trajectory is performed by standard stereo forward intersection of distinct shape points, such as the helmet top or heel. In the reported study, the measurements were verified by an independent GPS unit mounted on top of the jumper's helmet, synchronized to the timing of the camera exposures. Preliminary estimations report an accuracy of ±20 cm at a 30 Hz imaging frequency over a 40 m trajectory. The system is ready for fully automatic on-line application at ski-jumping sites that allow stereo camera views with an approximate base-distance ratio of 1:3 within the entire area of investigation.
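    A minimal sketch of the forward-intersection step with OpenCV follows, assuming two calibrated projection matrices and matched pixel coordinates of one shape point; the camera parameters and pixel positions are made up for illustration.

```python
# Sketch of stereo forward intersection: given two calibrated projection
# matrices and the pixel positions of one distinct shape point (e.g. the helmet
# top) in both images, recover its 3D position. All values are made up.
import numpy as np
import cv2

K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
# Left camera at the origin, right camera shifted 10 m along the baseline.
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

pt_left = np.array([[980.0], [400.0]])     # helmet top in the left image (pixels)
pt_right = np.array([[620.0], [400.0]])    # same point in the right image

X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)   # homogeneous 4x1
X = (X_h[:3] / X_h[3]).ravel()
print("helmet top in camera coordinates (m):", X.round(2))
```

    A wider baseline relative to the viewing distance makes the intersection better conditioned, which is consistent with the approximate 1:3 base-distance ratio mentioned above.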

  3. Machine Learning on Images: Combining Passive Microwave and Optical Data to Estimate Snow Water Equivalent

    Science.gov (United States)

    Dozier, J.; Tolle, K.; Bair, N.

    2014-12-01

    We have a problem that may be a specific example of a generic one. The task is to produce spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements. Several independent methods exist, but all are problematic. The remotely sensed date of disappearance of snow from each pixel can be combined with a calculation of melt to reconstruct the accumulated SWE for each day back to the last significant snowfall. Comparison with streamflow measurements in mountain ranges where such data are available shows this method to be accurate, but the big disadvantage is that SWE can only be calculated retroactively after the snow disappears, and even then only for areas with little accumulation during the melt season. Passive microwave sensors offer real-time global SWE estimates but suffer from several issues, notably signal loss in wet snow or in forests, saturation in deep snow, subpixel variability in the mountains owing to the large (~25 km) pixel size, and SWE overestimation in the presence of large grains such as depth hoar and surface hoar. Throughout the winter and spring, snow-covered area can be measured at sub-km spatial resolution with optical sensors, with accuracy and timeliness improved by interpolating and smoothing across multiple days. So the question is, how can we establish the relationship between reconstruction (available only after the snow goes away) and passive microwave and optical data to accurately estimate SWE during the snow season, when the information can help forecast spring runoff? Linear regression provides one answer, but can modern machine learning techniques (used to persuade people to click on web advertisements) adapt to improve forecasts of floods and droughts in areas where more than one billion people depend on snowmelt for their water resources?

  4. Low Vision

    Science.gov (United States)

    Low Vision Defined: Low Vision is defined as the best- ... 2010 U.S. Age-Specific Prevalence Rates for Low Vision by Age, and Race/Ethnicity Table for 2010 ...

  5. Fractured Visions

    DEFF Research Database (Denmark)

    Bonde, Inger Ellekilde

    2016-01-01

    In the post-war period a heterogeneous group of photographers articulated a new photographic approach to the city as motif, in a photographic language that combines intense formalism with subjective vision. This paper analyses the photobook Fragments of a City, published in 1960 by Danish photograp...

  6. Combining a Novel Computer Vision Sensor with a Cleaning Robot to Achieve Autonomous Pig House Cleaning

    DEFF Research Database (Denmark)

    Andersen, Nils Axel; Braithwaite, Ian David; Blanke, Mogens

    2005-01-01

    Cleaning of livestock buildings is the single most health-threatening task in the agricultural industry and a transition to robot-based cleaning would be instrumental to improving working conditions for employees. Present cleaning robots fall short on cleanness quality, as they cannot perform...... condition based cleaning. This paper describes how a novel sensor, developed for the purpose, and algorithms for classification and learning are combined with a commercial robot to obtain an autonomous system which meets the necessary quality attributes. These include features to make selective cleaning...... where dirty areas are detected, that operator assistance is called only when cleanness hypothesis cannot be made with confidence. The paper describes the design of the system where learning from experience maps and operator instructions are combined to obtain a smart and autonomous cleaning robot....

  7. Combining Gas Bearing and Smart Material Technologies for Improved Machine Performance Theory and Experiment

    DEFF Research Database (Denmark)

    Nielsen, Bo Bjerregaard

    According to industry leaders, the world is on the verge of the fourth industrial revolution in which the Internet of Things and cyber-physical systems are central concepts. Where the previous industrial revolution evolved around electronics, IT and automated production on machine level, Industry 4.0...

  8. Games and Machine Learning: A Powerful Combination in an Artificial Intelligence Course

    Science.gov (United States)

    Wallace, Scott A.; McCartney, Robert; Russell, Ingrid

    2010-01-01

    Project MLeXAI [Machine Learning eXperiences in Artificial Intelligence (AI)] seeks to build a set of reusable course curriculum and hands on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense--a simple real-time strategy game…

  9. Machine Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  10. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    Science.gov (United States)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
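
    A minimal sketch of the pupil-location step described above, using dark-pixel thresholding and a centroid on a synthetic eye image (the threshold and image are assumptions, not details of the reported system):

        import numpy as np

        def pupil_center(gray, threshold=40):
            """Return the (row, col) centroid of pixels darker than `threshold`
            in an 8-bit grayscale eye image; the pupil is the darkest region."""
            mask = gray < threshold
            if not mask.any():
                return None
            rows, cols = np.nonzero(mask)
            return rows.mean(), cols.mean()

        # Synthetic 120x160 image with a dark disc standing in for the pupil.
        img = np.full((120, 160), 200, dtype=np.uint8)
        rr, cc = np.ogrid[:120, :160]
        img[(rr - 60) ** 2 + (cc - 90) ** 2 < 15 ** 2] = 10
        print(pupil_center(img))  # approximately (60.0, 90.0)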

  11. Development of a machine combination for harvesting of small wood first thinnings; Yhdistelmaekoneen kehittaeminen pienpuun korjuuseen sekae ensi- harvennukseen

    Energy Technology Data Exchange (ETDEWEB)

    Nevalainen, P. [Outokummun Metalli Oy, Outokumpu (Finland)

    1997-12-01

    The aim of the project is to build a combined machine for first-thinning harvesting that performs both harvesting and forwarding. The original purpose has been extended to cover the harvesting head itself, which is connected to the base machine and is able to perform cutting, delimbing and transportation. This method is only meant to be used for harvesting energy wood. A crown-cutting method should be developed for this device. The basic idea of this harvesting head is usable, but the technical solutions of its functions need to be redesigned. The `guillotine cutting` is usable. The diameter of the cut stem should be 250-300 mm. In the future we will try to develop a device that can also perform delimbing when needed. This head is suitable for first-thinning harvesting. (orig.)

  12. A Hybrid Machine Learning Method for Fusing fMRI and Genetic Data: Combining both Improves Classification of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Honghui Yang

    2010-10-01

    Full Text Available We demonstrate a hybrid machine learning method to classify schizophrenia patients and healthy controls, using functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP) data. The method consists of four stages: (1) SNPs with the most discriminating information between the healthy controls and schizophrenia patients are selected to construct a support vector machine ensemble (SNP-SVME). (2) Voxels in the fMRI map contributing to classification are selected to build another SVME (Voxel-SVME). (3) Components of fMRI activation obtained with independent component analysis (ICA) are used to construct a single SVM classifier (ICA-SVMC). (4) The above three models are combined into a single module using a majority voting approach to make a final decision (Combined SNP-fMRI). The method was evaluated by a fully-validated leave-one-out method using 40 subjects (20 patients and 20 controls). The classification accuracy was: 0.74 for SNP-SVME, 0.82 for Voxel-SVME, 0.83 for ICA-SVMC, and 0.87 for Combined SNP-fMRI. Experimental results show that better classification accuracy was achieved by combining genetic and fMRI data than using either alone, indicating that genetics and brain function represent different, but partially complementary, aspects of schizophrenia etiopathology. This study suggests an effective way to reassess biological classification of individuals with schizophrenia, which is also potentially useful for identifying diagnostically important markers for the disorder.
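
    The stage (4) majority vote can be sketched as follows; single linear SVMs and random stand-in feature blocks replace the paper's SVM ensembles and real SNP/voxel/ICA inputs (scikit-learn assumed):

        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        y = np.repeat([0, 1], 20)  # 20 controls and 20 patients, as in the study
        # Stand-in feature blocks for the three classifiers (SNPs, fMRI voxels, ICA loadings).
        blocks = [rng.normal(size=(40, k)) + y[:, None] * 0.6 for k in (30, 50, 10)]

        correct = 0
        for train, test in LeaveOneOut().split(y):
            votes = []
            for X in blocks:  # one classifier per data block
                clf = SVC(kernel="linear").fit(X[train], y[train])
                votes.append(clf.predict(X[test])[0])
            pred = int(np.mean(votes) > 0.5)  # majority vote of the three decisions
            correct += pred == y[test][0]
        print("Leave-one-out accuracy:", correct / y.size)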

  13. Growing adaptive machines combining development and learning in artificial neural networks

    CERN Document Server

    Bredeche, Nicolas; Doursat, René

    2014-01-01

    The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs, and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists, to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a...

  14. 3D prostate segmentation of ultrasound images combining longitudinal image registration and machine learning

    Science.gov (United States)

    Yang, Xiaofeng; Fei, Baowei

    2012-02-01

    We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks were used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested in TRUS data from five patients. The average surface distance between our and manual segmentation is 1.18 +/- 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
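
    The Gabor texture-feature step can be sketched with scikit-image; the frequencies, orientations, and test image below are assumptions rather than the paper's three orthogonal filter banks:

        import numpy as np
        from skimage.data import camera
        from skimage.filters import gabor

        image = camera().astype(float)  # stand-in for a registered TRUS slice

        features = []
        for frequency in (0.05, 0.15, 0.25):            # assumed frequencies
            for theta in np.deg2rad([0, 45, 90, 135]):  # assumed orientations
                real, imag = gabor(image, frequency=frequency, theta=theta)
                features.append(np.hypot(real, imag))   # response magnitude

        # Per-pixel feature vectors of the kind that could feed a kernel SVM
        feature_stack = np.stack(features, axis=-1)
        print(feature_stack.shape)  # (512, 512, 12)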

  15. Combining Psychological Models with Machine Learning to Better Predict People’s Decisions

    Science.gov (United States)

    2012-03-09

    Subject terms: psychological models, machine learning, predicting decisions. Authors: Avi Rosenfeld, Inon Zukerman, Amos Azaria, Sarit Kraus (Department of Industrial Engineering, Jerusalem College of Technology, Jerusalem, Israel, among other affiliations). ... 1998; Kahneman & Tversky, 1979). To computer scientists, accurately predicting people's actions is critical for mixed human-computer systems such as

  16. Machine Learning Approach to Optimizing Combined Stimulation and Medication Therapies for Parkinson's Disease.

    Science.gov (United States)

    Shamir, Reuben R; Dolber, Trygve; Noecker, Angela M; Walter, Benjamin L; McIntyre, Cameron C

    2015-01-01

    Deep brain stimulation (DBS) of the subthalamic region is an established therapy for advanced Parkinson's disease (PD). However, patients often require time-intensive post-operative management to balance their coupled stimulation and medication treatments. Given the large and complex parameter space associated with this task, we propose that clinical decision support systems (CDSS) based on machine learning algorithms could assist in treatment optimization. Our objective was to develop a proof-of-concept implementation of a CDSS that incorporates patient-specific details on both stimulation and medication. Clinical data from 10 patients, and 89 post-DBS surgery visits, were used to create a prototype CDSS. The system was designed to provide three key functions: (1) information retrieval; (2) visualization of treatment; and (3) recommendation on expected effective stimulation and drug dosages, based on three machine learning methods that included support vector machines, Naïve Bayes, and random forest. Measures of medication dosages, time factors, and symptom-specific pre-operative response to levodopa were significantly correlated with post-operative outcomes (P < 0.05). The machine learning algorithms were able to accurately predict 86% (12/14) of the motor improvement scores at one year after surgery. Using patient-specific details, an appropriately parameterized CDSS could help select theoretically optimal DBS parameter settings and medication dosages that have potential to improve the clinical management of PD patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Combining retinal nerve fiber layer thickness with individual retinal blood vessel locations allows modeling of central vision loss in glaucoma

    Science.gov (United States)

    Wang, Hui; Wang, Mengyu; Baniasadi, Neda; Jin, Qingying; Elze, Tobias

    2017-02-01

    Purpose: To assess whether modeling of central vision loss (CVL) due to glaucoma by optical coherence tomography (OCT) retinal nerve fiber (RNF) layer thickness (RNFLT) can be improved by including the location of the major inferior temporal retinal artery (ITA), a known correlate of individual RNF geometry. Methods: Pattern deviations at the two locations of the Humphrey 24-2 visual field (VF) known to be specifically vulnerable to glaucomatous CVL, together with OCT RNFLT on the corresponding circumpapillary sector around the optic nerve head within a radius of 1.73 mm, were retrospectively selected from 428 eyes of 428 patients of a large clinical glaucoma service. ITA was marked on the 1.73 mm circle by a trained observer. Linear regression models were fitted with CVL as the dependent variable and VF mean deviation (MD) plus (1) RNFLT, (2) ITA, or (3) their combination as regressors. To assess CVL over all levels of glaucoma severity, the three models were compared to a null model containing only MD. A Bayesian model comparison was performed with the Bayes factor (BF) as the measure of strength of evidence (BF > 20: strong evidence over the null model). Results: Neither RNFLT (BF=0.9) nor ITA (BF=1.4) alone provided positive evidence over the null model, but their combination resulted in a model with strong evidence (BF=21.4). Conclusion: While the established circumpapillary RNFLT sector, based on population statistics, could not satisfactorily model CVL, the inclusion of a retinal parameter related to individual eye anatomy yielded a strong structure-function model.
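
    The model-comparison logic can be sketched by approximating each Bayes factor against the MD-only null model from BIC values (synthetic data; the study's Bayesian machinery may differ):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 428
        md, rnflt, ita = rng.normal(size=(3, n))                  # synthetic regressors
        cvl = 0.5 * md + 0.2 * rnflt * ita + rng.normal(size=n)   # synthetic CVL score

        def bic(y, X):
            return sm.OLS(y, sm.add_constant(X)).fit().bic

        bic_null = bic(cvl, md)                                   # null model: MD only
        models = {"MD+RNFLT": np.column_stack([md, rnflt]),
                  "MD+ITA": np.column_stack([md, ita]),
                  "MD+RNFLT+ITA": np.column_stack([md, rnflt, ita])}
        for name, X in models.items():
            bf = np.exp((bic_null - bic(cvl, X)) / 2)  # BIC approximation to the Bayes factor
            print(f"{name}: BF vs null = {bf:.1f}")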

  18. Correction: An integrated anti-arrhythmic target network of compound Chinese medicine Wenxin Keli revealed by combined machine learning and molecular pathway analysis.

    Science.gov (United States)

    Wang, Taiyi; Lu, Ming; Du, Qunqun; Yao, Xi; Zhang, Peng; Chen, Xiaonan; Xie, Weiwei; Li, Zheng; Ma, Yuling; Zhu, Yan

    2017-09-26

    Correction for 'An integrated anti-arrhythmic target network of a Chinese medicine compound, Wenxin Keli, revealed by combined machine learning and molecular pathway analysis' by Taiyi Wang et al., Mol. BioSyst., 2017, 13, 1018-1030.

  19. Cartesian visions.

    Science.gov (United States)

    Fara, Patricia

    2008-12-01

    Few original portraits exist of René Descartes, yet his theories of vision were central to Enlightenment thought. French philosophers combined his emphasis on sight with the English approach of insisting that ideas are not innate, but must be built up from experience. In particular, Denis Diderot criticised Descartes's views by describing how Nicholas Saunderson--a blind physics professor at Cambridge--relied on touch. Diderot also made Saunderson the mouthpiece for some heretical arguments against the existence of God.

  20. Support Vector Machines Parameter Selection Based on Combined Taguchi Method and Staelin Method for E-mail Spam Filtering

    Directory of Open Access Journals (Sweden)

    Wei-Chih Hsu

    2012-04-01

    Full Text Available Support vector machines (SVM) are a powerful tool for building good spam filtering models. However, the performance of the model depends on parameter selection. SVM parameter selection seriously affects classification performance during the training process. In this study, we combine the Taguchi method and the Staelin method to optimize the SVM-based e-mail spam filtering model and improve spam filtering accuracy. We compare it with other parameter optimization methods, such as grid search. Six real-world mail data sets are selected to demonstrate the effectiveness and feasibility of the method. The results show that the proposed method can find an effective model with high classification accuracy.
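
    The idea of searching the SVM parameter space more economically than one exhaustive grid can be sketched as a coarse pass followed by a refined pass around the best point; this is a loose stand-in for the Taguchi/Staelin designs, not their orthogonal-array construction, and the data set is synthetic:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)  # stand-in mail features

        def search(c_grid, g_grid):
            gs = GridSearchCV(SVC(kernel="rbf"), {"C": c_grid, "gamma": g_grid}, cv=5)
            gs.fit(X, y)
            return gs.best_params_, gs.best_score_

        # Coarse pass over wide, log-spaced ranges
        params, _ = search(np.logspace(-2, 3, 6), np.logspace(-4, 1, 6))
        # Refined pass centred (in log space) on the coarse optimum
        c0, g0 = np.log10(params["C"]), np.log10(params["gamma"])
        params, score = search(np.logspace(c0 - 1, c0 + 1, 5), np.logspace(g0 - 1, g0 + 1, 5))
        print(params, round(score, 3))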

  1. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

    Directory of Open Access Journals (Sweden)

    Pontil Massimiliano

    2009-10-01

    Full Text Available Abstract Background Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino-acids are systematically mutated to alanine and changes in free energy of binding (ΔΔG) measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. Results We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and it compares favourably to other available methods. In particular we find the best performances using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision of 56% and a recall of 65%. Conclusion We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been

  2. Combining decoder design and neural adaptation in brain-machine interfaces.

    Science.gov (United States)

    Shenoy, Krishna V; Carmena, Jose M

    2014-11-19

    Brain-machine interfaces (BMIs) aim to help people with paralysis by decoding movement-related neural signals into control signals for guiding computer cursors, prosthetic arms, and other assistive devices. Despite compelling laboratory experiments and ongoing FDA pilot clinical trials, system performance, robustness, and generalization remain challenges. We provide a perspective on how two complementary lines of investigation, that have focused on decoder design and neural adaptation largely separately, could be brought together to advance BMIs. This BMI paradigm should also yield new scientific insights into the function and dysfunction of the nervous system. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. PainVision® Apparatus for Assessment of Efficacy of Pulsed Radiofrequency Combined with Pharmacological Therapy in the Treatment of Postherpetic Neuralgia and Correlations with Measurements

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2017-01-01

    Full Text Available Objective. The PainVision device is an application developed for the evaluation of pain intensity. The objective was to assess the efficacy and safety of pulsed radiofrequency (PRF) combined with pharmacological therapy in the treatment of postherpetic neuralgia (PHN). We also discussed the correlation of the measurements. Method. Forty patients with PHN were randomized to treatment with PRF combined with pharmacological therapy (PRF group, n=20) or pharmacological therapy alone (control group, n=20) at postoperative 48 hours. The efficacy measures were pain degree (PD), assessed by PainVision, and visual analog scale (VAS), short-form McGill pain questionnaire (SF-McGill), and numeric rating scale sleep interference score (NRSSIS). Correlations between PD, VAS, SF-McGill, and NRSSIS were determined. Results. The PD for persistent pain (PP) and breakthrough pain (BTP) at postoperative 48 hours assessed by PainVision was significantly lower in the PRF group than in the control group (PD-PP, P<0.01; PD-BTP, P<0.01). PD and VAS were highly correlated for both persistent pain (r=0.453, ρ=0.008) and breakthrough pain (r=0.64, ρ=0.001). Conclusion. PRF was well tolerated and superior to isolated pharmacological therapy in the treatment of PHN. The PainVision device showed great value in the evaluation of pain intensity, and PD had an excellent correlation with VAS and SF-McGill.

  4. Combining human and machine intelligence to derive agents' behavioral rules for groundwater irrigation

    Science.gov (United States)

    Hu, Yao; Quinn, Christopher J.; Cai, Ximing; Garfinkle, Noah W.

    2017-11-01

    For agent-based modeling, the major challenges in deriving agents' behavioral rules arise from agents' bounded rationality and data scarcity. This study proposes a "gray box" approach to address the challenge by incorporating expert domain knowledge (i.e., human intelligence) with machine learning techniques (i.e., machine intelligence). Specifically, we propose using directed information graph (DIG), boosted regression trees (BRT), and domain knowledge to infer causal factors and identify behavioral rules from data. A case study is conducted to investigate farmers' pumping behavior in the Midwest, U.S.A. Results show that four factors identified by the DIG algorithm (corn price, underlying groundwater level, monthly mean temperature, and precipitation) have main causal influences on agents' decisions on monthly groundwater irrigation depth. The agent-based model is then developed based on the behavioral rules represented by three DIGs and modeled by BRTs, and coupled with a physically-based groundwater model to investigate the impacts of agents' pumping behavior on the underlying groundwater system in the context of coupled human and environmental systems.
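
    The BRT step, mapping the four identified factors to monthly irrigation depth, might look as follows; the data, units, and coefficients are synthetic placeholders, and scikit-learn's GradientBoostingRegressor stands in for the BRT implementation used in the study:

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        n = 400
        factors = pd.DataFrame({
            "corn_price": rng.uniform(3, 8, n),           # illustrative units
            "groundwater_level": rng.uniform(5, 60, n),
            "mean_temperature": rng.uniform(10, 35, n),
            "precipitation": rng.uniform(0, 150, n),
        })
        irrigation_depth = (
            2.0 * factors["corn_price"] + 0.3 * factors["mean_temperature"]
            - 0.05 * factors["precipitation"] - 0.02 * factors["groundwater_level"]
            + rng.normal(0, 1, n)
        )

        brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, random_state=0)
        print(cross_val_score(brt, factors, irrigation_depth, cv=5, scoring="r2").mean())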

  5. Rapid identification of pearl powder from Hyriopsis cumingii by Tri-step infrared spectroscopy combined with computer vision technology

    Science.gov (United States)

    Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua

    2018-01-01

    Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach based on Tri-step infrared spectroscopy with enhancing resolution combined with chemometrics for qualitative identification of pearl powder originating from three different quality grades of pearls and quantitative prediction of the proportions of shell powder adulterated in pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the pearl quality trait (visual color categories). Though the different grades of pearl powder or adulterated pearl powder have almost identical IR spectra, SD-IR peak intensity at about 861 cm-1 (v2 band) exhibited regular enhancement with the increasing quality grade of pearls, while the 1082 cm-1 (v1 band), 712 cm-1 and 699 cm-1 (v4 band) were just the reverse. By contrast, only the peak intensity at 862 cm-1 was enhanced regularly with the increasing concentration of shell powder. Thus, the bands in the ranges of (1550-1350 cm-1, 730-680 cm-1) and (830-880 cm-1, 690-725 cm-1) could be exclusive ranges to discriminate three distinct pearl powders and identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on IR spectra were established successfully by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner.
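
    The chemometric step, PCA for qualitative grouping and PLS for predicting the adulterated shell-powder proportion, might look like this on synthetic spectra (band shapes, mixing model, and scikit-learn usage are all illustrative assumptions):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        wavenumbers = np.linspace(1550, 680, 300)
        pure_pearl = np.exp(-((wavenumbers - 861) / 15) ** 2)        # v2 band, illustrative
        pure_shell = 1.3 * np.exp(-((wavenumbers - 862) / 15) ** 2)

        fractions = rng.uniform(0, 0.5, 120)                         # shell-powder proportion
        spectra = np.array([
            (1 - f) * pure_pearl + f * pure_shell + rng.normal(0, 0.01, wavenumbers.size)
            for f in fractions
        ])

        scores = PCA(n_components=2).fit_transform(spectra)          # qualitative grouping
        pls = PLSRegression(n_components=2)                          # quantitative prediction
        print(scores.shape, cross_val_score(pls, spectra, fractions, cv=5).mean())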

  6. Rapid identification of pearl powder from Hyriopsis cumingii by Tri-step infrared spectroscopy combined with computer vision technology.

    Science.gov (United States)

    Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua

    2018-01-15

    Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach based on Tri-step infrared spectroscopy with enhancing resolution combined with chemometrics for qualitative identification of pearl powder originating from three different quality grades of pearls and quantitative prediction of the proportions of shell powder adulterated in pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the pearl quality trait (visual color categories). Though the different grades of pearl powder or adulterated pearl powder have almost identical IR spectra, SD-IR peak intensity at about 861 cm-1 (v2 band) exhibited regular enhancement with the increasing quality grade of pearls, while the 1082 cm-1 (v1 band), 712 cm-1 and 699 cm-1 (v4 band) were just the reverse. By contrast, only the peak intensity at 862 cm-1 was enhanced regularly with the increasing concentration of shell powder. Thus, the bands in the ranges of (1550-1350 cm-1, 730-680 cm-1) and (830-880 cm-1, 690-725 cm-1) could be exclusive ranges to discriminate three distinct pearl powders and identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on IR spectra were established successfully by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner. Copyright © 2017. Published by Elsevier B.V.

  7. Prediction of B-cell Linear Epitopes with a Combination of Support Vector Machine Classification and Amino Acid Propensity Identification

    Directory of Open Access Journals (Sweden)

    Hsin-Wei Wang

    2011-01-01

    Full Text Available Epitopes are antigenic determinants that are useful because they induce B-cell antibody production and stimulate T-cell activation. Bioinformatics can enable rapid, efficient prediction of potential epitopes. Here, we designed a novel B-cell linear epitope prediction system called LEPS, Linear Epitope Prediction by Propensities and Support Vector Machine, that combined physico-chemical propensity identification and support vector machine (SVM) classification. We tested the LEPS on four datasets: AntiJen, HIV, a newly generated PC, and AHP, a combination of these three datasets. Peptides with globally or locally high physicochemical propensities were first identified as primitive linear epitope (LE) candidates. Then, candidates were classified with the SVM based on the unique features of amino acid segments. This reduced the number of predicted epitopes and enhanced the positive prediction value (PPV). Compared to four other well-known LE prediction systems, the LEPS achieved the highest accuracy (72.52%), specificity (84.22%), PPV (32.07%), and Matthews' correlation coefficient (10.36%).

  8. Games and machine learning: a powerful combination in an artificial intelligence course

    Science.gov (United States)

    Wallace, Scott A.; McCartney, Robert; Russell, Ingrid

    2010-03-01

    Project MLeXAI (Machine Learning eXperiences in Artificial Intelligence (AI)) seeks to build a set of reusable course curriculum and hands on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense - a simple real-time strategy game and Checkers - a classic turn-based board game. From the instructors' perspective, we examine aspects of design and implementation as well as the challenges and rewards of using the curricula. We explore students' responses to the projects via the results of a common survey. Finally, we compare the student perceptions from the game-based projects to non-game-based projects from the first phase of Project MLeXAI.

  9. Automated science target selection for future Mars rovers: A machine vision approach for the future ESA ExoMars 2018 rover mission

    Science.gov (United States)

    Tao, Yu; Muller, Jan-Peter

    2013-04-01

    The ESA ExoMars 2018 rover is planned to perform autonomous science target selection (ASTS) using the approaches described in [1]. However, the approaches shown to date have focused on coarse features rather than the identification of specific geomorphological units. These higher-level "geoobjects" can later be employed to perform intelligent reasoning or machine learning. In this work, we show the next stage in the ASTS through examples displaying the identification of bedding planes (not just linear features in rock-face images) and the identification and discrimination of rocks in a rock-strewn landscape (not just rocks). We initially detect the layers and rocks in 2D processing via morphological gradient detection [1] and graph cuts based segmentation [2] respectively. To take this further requires the retrieval of 3D point clouds and the combined processing of point clouds and images for reasoning about the scene. An example is the differentiation of rocks in rover images. This will depend on knowledge of range and range-order of features. We show demonstrations of these "geo-objects" using MER and MSL (released through the PDS) as well as data collected within the EU-PRoViScout project (http://proviscout.eu). An initial assessment will be performed of the automated "geo-objects" using the OpenSource StereoViewer developed within the EU-PRoViSG project (http://provisg.eu) which is released in sourceforge. In future, additional 3D measurement tools will be developed within the EU-FP7 PRoViDE2 project, which started on 1.1.13. References: [1] M. Woods, A. Shaw, D. Barnes, D. Price, D. Long, D. Pullan, (2009) "Autonomous Science for an ExoMars Rover-Like Mission", Journal of Field Robotics Special Issue: Special Issue on Space Robotics, Part II, Volume 26, Issue 4, pages 358-390. [2] J. Shi, J. Malik, (2000) "Normalized Cuts and Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 22. [3] D. Shin, and J.-P. Muller (2009
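
    The 2D layer-detection step cited above as morphological gradient detection [1] can be sketched with SciPy (an illustrative stand-in on a synthetic rock-face image, not the mission software):

        import numpy as np
        from scipy import ndimage

        # Synthetic rock-face image with horizontal bedding layers plus noise.
        rng = np.random.default_rng(0)
        rows = np.arange(200)
        image = 100 + 40 * np.sin(rows / 8.0)[:, None] * np.ones((1, 300))
        image += rng.normal(0, 2, image.shape)

        # Morphological gradient: dilation minus erosion highlights layer boundaries.
        gradient = (ndimage.grey_dilation(image, size=(5, 5))
                    - ndimage.grey_erosion(image, size=(5, 5)))
        edges = gradient > np.percentile(gradient, 90)  # assumed threshold
        print(edges.sum(), "candidate boundary pixels")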

  10. Development of a machine combination for harvesting of small wood and first thinnings; Yhdistelmaekoneen kehittaeminen pienpuun korjuuseen sekae ensiharvennukseen

    Energy Technology Data Exchange (ETDEWEB)

    Nevalainen, P.; Kinnunen, K. [Outokummun Metalli Oy, Outokumpu (Finland)

    1996-12-31

    The objective of the research was to develop a machine combination for the harvesting of small wood that carries out both harvesting and forest haulage. The development was started in September 1995 and the first prototype of the machine is ready. A Lokomo 910 forest tractor was acquired for the tests; the prototype has been mounted on the tractor, and the tests were started at the beginning of March 1996. The device will be revised after the tests, and the different working practices will be described. A time-consumption study and its analysis will be carried out after the equipment tests. The device consists of a grapple, equipped with a guillotine cutting unit, mounted on the tractor. In the test phase the actual felling is done stem by stem. The stem can be forwarded directly into the load or left aside, so that new stems can be brought beside it and all the stems taken into the load together. The harvested stems are easiest to handle during forwarding in the upright position, and they are `felled` into the load space. Hence the space requirement is small and damage to the remaining trees can be minimized. The logging road is made while driving backwards by felling the trees on the road line to the sides of the road, and the stems are collected into the load space on the return drive. The harvested stems are transported undelimbed to the storage site, where they can be processed with a multi-function machine or chipped after the thinning has been completed. The cutting device can be turned aside when using the loading grapple, so the operation is similar to that of an ordinary timber loader.

  11. Combining machine learning and ontological data handling for multi-source classification of nature conservation areas

    Science.gov (United States)

    Moran, Niklas; Nieland, Simon; Tintrup gen. Suntrup, Gregor; Kleinschmit, Birgit

    2017-02-01

    Manual field surveys for nature conservation management are expensive and time-consuming and could be supplemented and streamlined by using Remote Sensing (RS). RS is critical to meet requirements of existing laws such as the EU Habitats Directive (HabDir) and more importantly to meet future challenges. The full potential of RS has yet to be harnessed as different nomenclatures and procedures hinder interoperability, comparison and provenance. Therefore, automated tools are needed to use RS data to produce comparable, empirical data outputs that lend themselves to data discovery and provenance. These issues are addressed by a novel, semi-automatic ontology-based classification method that uses machine learning algorithms and Web Ontology Language (OWL) ontologies that yields traceable, interoperable and observation-based classification outputs. The method was tested on European Union Nature Information System (EUNIS) grasslands in Rheinland-Palatinate, Germany. The developed methodology is a first step in developing observation-based ontologies in the field of nature conservation. The tests show promising results for the determination of the grassland indicators wetness and alkalinity with an overall accuracy of 85% for alkalinity and 76% for wetness.

  12. Low Vision Aids and Low Vision Rehabilitation

    Science.gov (United States)

    Low Vision Rehabilitation and Low Vision Aids ... that same viewing direction for other objects. Vision rehabilitation: using the vision you have. Vision rehabilitation is ...

  13. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    Science.gov (United States)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471

  14. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    Science.gov (United States)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-11-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.
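
    The two-step COMBI procedure, SVM-based screening followed by association tests with a threshold correction on the surviving SNPs, can be sketched as follows; the genotypes are synthetic, and a linear SVM weight ranking with Bonferroni correction stands in for the published implementation:

        import numpy as np
        from scipy.stats import chi2_contingency
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n_subjects, n_snps, k = 1000, 500, 30
        genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))  # minor-allele counts 0/1/2
        risk = genotypes[:, [10, 200]].sum(axis=1)                 # two planted causal SNPs
        phenotype = (risk + rng.normal(0, 1.5, n_subjects) > risk.mean()).astype(int)

        # Step 1: train a linear SVM and keep the k SNPs with the largest absolute weights.
        svm = LinearSVC(C=0.01, dual=False, max_iter=5000).fit(genotypes, phenotype)
        candidates = np.argsort(np.abs(svm.coef_[0]))[-k:]

        # Step 2: test only the candidates, Bonferroni-corrected for k tests.
        for snp in candidates:
            table = np.array([[np.sum((genotypes[:, snp] == g) & (phenotype == c))
                               for g in range(3)] for c in range(2)])
            p = chi2_contingency(table)[1]
            if p < 0.05 / k:
                print(f"SNP {snp}: p = {p:.2e}")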

  15. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  16. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system could fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
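
    One of the colorimetric principles at issue, converting camera RGB to a device-independent space such as CIE XYZ and CIELAB before any color difference is judged, can be illustrated as follows; sRGB primaries and the D65 white point are assumed, whereas a real vision colorimeter would use a camera-specific characterization:

        import numpy as np

        M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],   # sRGB/D65 matrix
                                  [0.2126, 0.7152, 0.0722],
                                  [0.0193, 0.1192, 0.9505]])
        WHITE_D65 = np.array([0.9505, 1.0000, 1.0890])

        def srgb_to_lab(rgb):
            rgb = np.asarray(rgb, dtype=float) / 255.0
            # Undo the sRGB gamma (linearize), then project to XYZ.
            linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
            xyz = M_SRGB_TO_XYZ @ linear
            t = xyz / WHITE_D65
            f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
            L = 116 * f[1] - 16
            a = 500 * (f[0] - f[1])
            b = 200 * (f[1] - f[2])
            return L, a, b

        print(srgb_to_lab([200, 120, 80]))  # L*, a*, b* for one measured pixel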

  17. Artificial vision.

    Science.gov (United States)

    Zarbin, M; Montemagno, C; Leary, J; Ritch, R

    2011-09-01

    A number of treatment options are emerging for patients with retinal degenerative disease, including gene therapy, trophic factor therapy, visual cycle inhibitors (e.g., for patients with Stargardt disease and allied conditions), and cell transplantation. A radically different approach, which will augment but not replace these options, is termed neural prosthetics ("artificial vision"). Although rewiring of inner retinal circuits and inner retinal neuronal degeneration occur in association with photoreceptor degeneration in retinitis pigmentosa (RP), it is possible to create visually useful percepts by stimulating retinal ganglion cells electrically. This fact has led to the development of techniques to induce photosensitivity in cells that are not normally light sensitive, as well as to the development of the bionic retina. Advances in artificial vision continue at a robust pace. These advances are based on the use of molecular engineering and nanotechnology to render cells light-sensitive, to target ion channels to the appropriate cell type (e.g., bipolar cell) and/or cell region (e.g., dendritic tree vs. soma), and on sophisticated image processing algorithms that take advantage of our knowledge of signal processing in the retina. Combined with advances in gene therapy, pathway-based therapy, and cell-based therapy, "artificial vision" technologies create a powerful armamentarium with which ophthalmologists will be able to treat blindness in patients who have a variety of degenerative retinal diseases.

  18. Separating depressive comorbidity from panic disorder: A combined functional magnetic resonance imaging and machine learning approach.

    Science.gov (United States)

    Lueken, Ulrike; Straube, Benjamin; Yang, Yunbo; Hahn, Tim; Beesdo-Baum, Katja; Wittchen, Hans-Ulrich; Konrad, Carsten; Ströhle, Andreas; Wittmann, André; Gerlach, Alexander L; Pfleiderer, Bettina; Arolt, Volker; Kircher, Tilo

    2015-09-15

    Depression is frequent in panic disorder (PD); yet, little is known about its influence on the neural substrates of PD. Difficulties in fear inhibition during safety signal processing have been reported as a pathophysiological feature of PD that is attenuated by depression. We investigated the impact of comorbid depression in PD with agoraphobia (AG) on the neural correlates of fear conditioning and the potential of machine learning to predict comorbidity status on the individual patient level based on neural characteristics. Fifty-nine PD/AG patients including 26 (44%) with a comorbid depressive disorder (PD/AG+DEP) underwent functional magnetic resonance imaging (fMRI). Comorbidity status was predicted using a random undersampling tree ensemble in a leave-one-out cross-validation framework. PD/AG-DEP patients showed altered neural activation during safety signal processing, while +DEP patients exhibited generally decreased dorsolateral prefrontal and insular activation. Comorbidity status was correctly predicted in 79% of patients (sensitivity: 73%; specificity: 85%) based on brain activation during fear conditioning (corrected for potential confounders: accuracy: 73%; sensitivity: 77%; specificity: 70%). No primary depressed patients were available; only medication-free patients were included. Major depression and dysthymia were collapsed (power considerations). Neurofunctional activation during safety signal processing differed between patients with or without comorbid depression, a finding which may explain heterogeneous results across previous studies. These findings demonstrate the relevance of comorbidity when investigating neurofunctional substrates of anxiety disorders. Predicting individual comorbidity status may translate neurofunctional data into clinically relevant information which might aid in planning individualized treatment. The study was registered with the ISRCTN80046034. Copyright © 2015 Elsevier B.V. All rights reserved.
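
    The classifier used here, a random undersampling tree ensemble evaluated with leave-one-out cross-validation, can be sketched as follows; the features are synthetic stand-ins for the fMRI activation measures, and scikit-learn decision trees are assumed as base learners:

        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        y = np.array([1] * 26 + [0] * 33)                      # comorbidity split as in the sample
        X = rng.normal(size=(y.size, 20)) + y[:, None] * 0.7   # stand-in activation features

        def undersampled_vote(X_tr, y_tr, x_te, n_trees=50):
            """Each tree sees a class-balanced random undersample of the training set."""
            minority = min(np.bincount(y_tr))
            votes = []
            for _ in range(n_trees):
                idx = np.concatenate([rng.choice(np.where(y_tr == c)[0], minority, replace=False)
                                      for c in (0, 1)])
                tree = DecisionTreeClassifier(max_depth=3).fit(X_tr[idx], y_tr[idx])
                votes.append(tree.predict(x_te.reshape(1, -1))[0])
            return int(np.mean(votes) >= 0.5)

        hits = sum(undersampled_vote(X[tr], y[tr], X[te][0]) == y[te][0]
                   for tr, te in LeaveOneOut().split(X))
        print("Leave-one-out accuracy:", hits / y.size)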

  19. Combining machine learning and propensity score weighting to estimate causal effects in multivalued treatments.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    Interventions with multivalued treatments are common in medical and health research; examples include comparing the efficacy of competing interventions and contrasting various doses of a drug. In recent years, there has been growing interest in the development of methods that estimate multivalued treatment effects using observational data. This paper extends a previously described analytic framework for evaluating binary treatments to studies involving multivalued treatments utilizing a machine learning algorithm called optimal discriminant analysis (ODA). We describe the differences between regression-based treatment effect estimators and effects estimated using the ODA framework. We then present an empirical example using data from an intervention including three study groups to compare corresponding effects. The regression-based estimators produced statistically significant mean differences between the two intervention groups, and between one of the treatment groups and controls. In contrast, ODA was unable to discriminate between distributions of any of the three study groups. Optimal discriminant analysis offers an appealing alternative to conventional regression-based models for estimating effects in multivalued treatment studies because of its insensitivity to skewed data and use of accuracy measures applicable to all prognostic analyses. If these analytic approaches produce consistent treatment effect P values, this bolsters confidence in the validity of the results. If the approaches produce conflicting treatment effect P values, as they do in our empirical example, the investigator should consider the ODA-derived estimates to be most robust, given that ODA uses permutation P values that require no distributional assumptions and are thus, always valid. © 2016 John Wiley & Sons, Ltd.

  20. ADMET Evaluation in Drug Discovery. 16. Predicting hERG Blockers by Combining Multiple Pharmacophores and Machine Learning Approaches.

    Science.gov (United States)

    Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun

    2016-08-01

    Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.

  1. A new machine-learning method to prognosticate paraquat poisoned patients by combining coagulation, liver, and kidney indices.

    Directory of Open Access Journals (Sweden)

    Lufeng Hu

    Full Text Available The prognosis of paraquat (PQ) poisoning is highly correlated to plasma PQ concentration, which has been identified as the most important index in PQ poisoning. This study investigated the predictive value of coagulation, liver, and kidney indices in prognosticating PQ-poisoning patients, when aligned with plasma PQ concentrations. Coagulation, liver, and kidney indices were first analyzed by variance analysis, receiver operating characteristic curves, and Fisher discriminant analysis. Then, a new, intelligent, machine learning-based system was established to effectively provide prognostic analysis of PQ-poisoning patients based on a combination of the aforementioned indices. In the proposed system, an enhanced extreme learning machine wrapped with a grey wolf-optimization strategy was developed to predict the risk status from a pool of 103 patients (56 males and 47 females); of these, 52 subjects were deceased and 51 alive. The proposed method was rigorously evaluated against this real-life dataset, in terms of accuracy, Matthews correlation coefficients, sensitivity, and specificity. Additionally, the feature selection was investigated to identify correlating factors for risk status. The results demonstrated that there were significant differences in the coagulation, liver, and kidney indices between deceased and surviving subjects (p<0.05). Aspartate aminotransferase, prothrombin time, prothrombin activity, total bilirubin, direct bilirubin, indirect bilirubin, alanine aminotransferase, urea nitrogen, and creatinine were the most highly correlated indices in PQ poisoning and showed statistical significance (p<0.05) in predicting PQ-poisoning prognoses. According to the feature selection, the most important correlated indices were found to be associated with aspartate aminotransferase, the aspartate aminotransferase to alanine ratio, creatinine, prothrombin time, and prothrombin activity. The method proposed here showed excellent results that were

  2. A new machine-learning method to prognosticate paraquat poisoned patients by combining coagulation, liver, and kidney indices.

    Science.gov (United States)

    Hu, Lufeng; Li, Huaizhong; Cai, Zhennao; Lin, Feiyan; Hong, Guangliang; Chen, Huiling; Lu, Zhongqiu

    2017-01-01

    The prognosis of paraquat (PQ) poisoning is highly correlated to plasma PQ concentration, which has been identified as the most important index in PQ poisoning. This study investigated the predictive value of coagulation, liver, and kidney indices in prognosticating PQ-poisoning patients, when aligned with plasma PQ concentrations. Coagulation, liver, and kidney indices were first analyzed by variance analysis, receiver operating characteristic curves, and Fisher discriminant analysis. Then, a new, intelligent, machine learning-based system was established to effectively provide prognostic analysis of PQ-poisoning patients based on a combination of the aforementioned indices. In the proposed system, an enhanced extreme learning machine wrapped with a grey wolf-optimization strategy was developed to predict the risk status from a pool of 103 patients (56 males and 47 females); of these, 52 subjects were deceased and 51 alive. The proposed method was rigorously evaluated against this real-life dataset, in terms of accuracy, Matthews correlation coefficients, sensitivity, and specificity. Additionally, the feature selection was investigated to identify correlating factors for risk status. The results demonstrated that there were significant differences in the coagulation, liver, and kidney indices between deceased and surviving subjects (p<0.05). Aspartate aminotransferase, prothrombin time, prothrombin activity, total bilirubin, direct bilirubin, indirect bilirubin, alanine aminotransferase, urea nitrogen, and creatinine were the most highly correlated indices in PQ poisoning and showed statistical significance (p<0.05) in predicting PQ-poisoning prognoses. According to the feature selection, the most important correlated indices were found to be associated with aspartate aminotransferase, the aspartate aminotransferase to alanine ratio, creatinine, prothrombin time, and prothrombin activity. The method proposed here showed excellent results that were better than

  4. Combining Human and Machine Learning to Map Cropland in the 21st Century's Major Agricultural Frontier

    Science.gov (United States)

    Estes, L. D.; Debats, S. R.; Caylor, K. K.; Evans, T. P.; Gower, D.; McRitchie, D.; Searchinger, T.; Thompson, D. R.; Wood, E. F.; Zeng, L.

    2016-12-01

    In the coming decades, large areas of new cropland will be created to meet the world's rapidly growing food demands. Much of this new cropland will be in sub-Saharan Africa, where food needs will increase most and the area of remaining potential farmland is greatest. If we are to understand the impacts of global change, it is critical to accurately identify Africa's existing croplands and how they are changing. Yet the continent's smallholder-dominated agricultural systems are unusually challenging for remote sensing analyses, making accurate area estimates difficult to obtain, let alone important details related to field size and geometry. Fortunately, the rapidly growing archives of moderate to high-resolution satellite imagery hosted on open servers now offer an unprecedented opportunity to improve landcover maps. We present a system that integrates two critical components needed to capitalize on this opportunity: 1) human image interpretation and 2) machine learning (ML). Human judgment is needed to accurately delineate training sites within noisy imagery and a highly variable cover type, while ML provides the ability to scale and to interpret large feature spaces that defy human comprehension. Because large amounts of training data are needed (a major impediment for analysts), we use a crowdsourcing platform that connects amazon.com's Mechanical Turk service to satellite imagery hosted on open image servers. Workers map visible fields at pre-assigned sites, and are paid according to their mapping accuracy. Initial tests show overall high map accuracy and mapping rates >1800 km2/hour. The ML classifier uses random forests and randomized quasi-exhaustive feature selection, and is highly effective in classifying diverse agricultural types in southern Africa (AUC > 0.9). We connect the ML and crowdsourcing components to make an interactive learning framework. The ML algorithm performs an initial classification using a first batch of crowd-sourced maps, using

  5. Combining an expert-based medical entity recognizer to a machine-learning system: methods and a case study.

    Science.gov (United States)

    Zweigenbaum, Pierre; Lavergne, Thomas; Grabar, Natalia; Hamon, Thierry; Rosset, Sophie; Grouin, Cyril

    2013-01-01

    Medical entity recognition is currently generally performed by data-driven methods based on supervised machine learning. Expert-based systems, where linguistic and domain expertise are directly provided to the system are often combined with data-driven systems. We present here a case study where an existing expert-based medical entity recognition system, Ogmios, is combined with a data-driven system, Caramba, based on a linear-chain Conditional Random Field (CRF) classifier. Our case study specifically highlights the risk of overfitting incurred by an expert-based system. We observe that it prevents the combination of the 2 systems from obtaining improvements in precision, recall, or F-measure, and analyze the underlying mechanisms through a post-hoc feature-level analysis. Wrapping the expert-based system alone as attributes input to a CRF classifier does boost its F-measure from 0.603 to 0.710, bringing it on par with the data-driven system. The generalization of this method remains to be further investigated.
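
    The reported gain comes from feeding the expert system's decisions to a CRF as token-level attributes. The sketch below shows one plausible way to do that wrapping with the third-party sklearn-crfsuite package; the feature keys (including the ogmios_label field) and the toy sentences are assumptions for illustration, not the authors' actual feature set.

```python
# Sketch: wrapping an expert system's token-level predictions as CRF input
# features, in the spirit of the combination described above.
import sklearn_crfsuite

def token_features(sentence, i):
    word, expert_label = sentence[i]           # (token, label from expert system)
    return {
        "word.lower": word.lower(),
        "word.isupper": word.isupper(),
        "ogmios_label": expert_label,          # expert-based output used as a feature
    }

def sent_to_features(sentence):
    return [token_features(sentence, i) for i in range(len(sentence))]

# Hypothetical training data: tokens paired with expert labels, plus gold labels.
train_sents = [[("aspirin", "DRUG"), ("100", "DOSE"), ("mg", "DOSE")]]
train_gold = [["B-DRUG", "B-DOSE", "I-DOSE"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit([sent_to_features(s) for s in train_sents], train_gold)
print(crf.predict([sent_to_features(train_sents[0])]))
```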

  6. Computer-aided classification of Alzheimer's disease based on support vector machine with combination of cerebral image features in MRI

    Science.gov (United States)

    Jongkreangkrai, C.; Vichianin, Y.; Tocharoenchai, C.; Arimura, H.; Alzheimer's Disease Neuroimaging Initiative

    2016-03-01

    Several studies have differentiated Alzheimer's disease (AD) using cerebral image features derived from MR brain images. In this study, we were interested in combining hippocampus and amygdala volumes and entorhinal cortex thickness to improve the performance of AD differentiation. Thus, our objective was to investigate the useful features obtained from MRI for classification of AD patients using support vector machine (SVM). T1-weighted MR brain images of 100 AD patients and 100 normal subjects were processed using FreeSurfer software to measure hippocampus and amygdala volumes and entorhinal cortex thicknesses in both brain hemispheres. Relative volumes of hippocampus and amygdala were calculated to correct variation in individual head size. SVM was employed with five combinations of features (H: hippocampus relative volumes, A: amygdala relative volumes, E: entorhinal cortex thicknesses, HA: hippocampus and amygdala relative volumes and ALL: all features). Receiver operating characteristic (ROC) analysis was used to evaluate the method. AUC values of five combinations were 0.8575 (H), 0.8374 (A), 0.8422 (E), 0.8631 (HA) and 0.8906 (ALL). Although “ALL” provided the highest AUC, there were no statistically significant differences among them except for “A” feature. Our results showed that all suggested features may be feasible for computer-aided classification of AD patients.
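
    A minimal sketch of the feature-combination comparison described above, assuming scikit-learn and synthetic stand-ins for the FreeSurfer measurements; the column names and label coding are hypothetical.

```python
# Sketch: comparing cross-validated AUC for the H / A / E / HA / ALL feature sets
# with an SVM. Data are synthetic placeholders, not the study's measurements.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hippocampus_rel_vol": rng.normal(size=200),
    "amygdala_rel_vol": rng.normal(size=200),
    "entorhinal_thickness": rng.normal(size=200),
})
y = (df.sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)  # 1 = AD (assumed)

feature_sets = {
    "H": ["hippocampus_rel_vol"],
    "A": ["amygdala_rel_vol"],
    "E": ["entorhinal_thickness"],
    "HA": ["hippocampus_rel_vol", "amygdala_rel_vol"],
    "ALL": ["hippocampus_rel_vol", "amygdala_rel_vol", "entorhinal_thickness"],
}
for name, cols in feature_sets.items():
    scores = cross_val_predict(SVC(probability=True), df[cols], y,
                               cv=5, method="predict_proba")[:, 1]
    print(name, round(roc_auc_score(y, scores), 3))
```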

  7. Combined pigmentary and structural effects tune wing scale coloration to color vision in the swallowtail butterfly Papilio xuthus

    OpenAIRE

    Stavenga, Doekele G; Matsushita, Atsuko; Arikawa, Kentaro

    2015-01-01

    Butterflies have well-developed color vision, presumably optimally tuned to the detection of conspecifics by their wing coloration. Here we investigated the pigmentary and structural basis of the wing colors in the Japanese yellow swallowtail butterfly, Papilio xuthus, applying spectrophotometry, scatterometry, light and electron microscopy, and optical modeling. The approximately flat lower lamina of the wing scales plays a crucial role in wing coloration. In the cream, orange and black scales, the ...

  8. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, a Linux analysis software runs on a Macbook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  9. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-05-01

    Full Text Available Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly more important, which has prompted the development of technologies that are capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, there are unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM)-based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which are significantly better than previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed

  10. Machine Learning Classification Combining Multiple Features of A Hyper-Network of fMRI Data in Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Hao Guo

    2017-11-01

    Full Text Available Exploring functional interactions among various brain regions is helpful for understanding the pathological underpinnings of neurological disorders. Brain networks provide an important representation of those functional interactions, and thus are widely applied in the diagnosis and classification of neurodegenerative diseases. Many mental disorders involve a sharp decline in cognitive ability as a major symptom, which can be caused by abnormal connectivity patterns among several brain regions. However, conventional functional connectivity networks are usually constructed based on pairwise correlations among different brain regions. This approach ignores higher-order relationships, and cannot effectively characterize the high-order interactions of many brain regions working together. Recent neuroscience research suggests that higher-order relationships between brain regions are important for brain network analysis. Hyper-networks have been proposed that can effectively represent the interactions among brain regions. However, this method extracts the local properties of brain regions as features, but ignores the global topology information, which affects the evaluation of network topology and reduces the performance of the classifier. This problem can be compensated by a subgraph feature-based method, but it is not sensitive to change in a single brain region. Considering that both of these feature extraction methods result in the loss of information, we propose a novel machine learning classification method that combines multiple features of a hyper-network based on functional magnetic resonance imaging in Alzheimer's disease. The method combines the brain region features and subgraph features, and then uses a multi-kernel SVM for classification. This retains not only the global topological information, but also the sensitivity to change in a single brain region. To certify the proposed method, 28 normal control subjects and 38 Alzheimer

  11. Machine Learning Classification Combining Multiple Features of A Hyper-Network of fMRI Data in Alzheimer's Disease.

    Science.gov (United States)

    Guo, Hao; Zhang, Fan; Chen, Junjie; Xu, Yong; Xiang, Jie

    2017-01-01

    Exploring functional interactions among various brain regions is helpful for understanding the pathological underpinnings of neurological disorders. Brain networks provide an important representation of those functional interactions, and thus are widely applied in the diagnosis and classification of neurodegenerative diseases. Many mental disorders involve a sharp decline in cognitive ability as a major symptom, which can be caused by abnormal connectivity patterns among several brain regions. However, conventional functional connectivity networks are usually constructed based on pairwise correlations among different brain regions. This approach ignores higher-order relationships, and cannot effectively characterize the high-order interactions of many brain regions working together. Recent neuroscience research suggests that higher-order relationships between brain regions are important for brain network analysis. Hyper-networks have been proposed that can effectively represent the interactions among brain regions. However, this method extracts the local properties of brain regions as features, but ignores the global topology information, which affects the evaluation of network topology and reduces the performance of the classifier. This problem can be compensated by a subgraph feature-based method, but it is not sensitive to change in a single brain region. Considering that both of these feature extraction methods result in the loss of information, we propose a novel machine learning classification method that combines multiple features of a hyper-network based on functional magnetic resonance imaging in Alzheimer's disease. The method combines the brain region features and subgraph features, and then uses a multi-kernel SVM for classification. This retains not only the global topological information, but also the sensitivity to change in a single brain region. To certify the proposed method, 28 normal control subjects and 38 Alzheimer's disease patients were
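
    One plausible reading of the multi-kernel combination described above is mixing a kernel built on brain-region features with one built on subgraph features before training an SVM. The sketch below illustrates that idea with scikit-learn's precomputed-kernel SVC; the RBF kernels, the fixed mixing weight and the synthetic data are assumptions, not the authors' exact setup.

```python
# Sketch: a simple multi-kernel SVM in the spirit of the method above.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_region = rng.normal(size=(66, 90))      # hypothetical brain-region features
X_subgraph = rng.normal(size=(66, 200))   # hypothetical subgraph features
y = rng.integers(0, 2, size=66)           # 0 = control, 1 = AD (assumed coding)

w = 0.5                                   # kernel mixing weight (assumption)
K = w * rbf_kernel(X_region) + (1 - w) * rbf_kernel(X_subgraph)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                    # training accuracy only, for illustration
```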

  12. Machine learning a probabilistic perspective

    CERN Document Server

    Murphy, Kevin P

    2012-01-01

    Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic method...

  13. Identifying Environmental and Social Factors Predisposing to Pathological Gambling Combining Standard Logistic Regression and Logic Learning Machine.

    Science.gov (United States)

    Parodi, Stefano; Dosi, Corrado; Zambon, Antonella; Ferrari, Enrico; Muselli, Marco

    2017-12-01

    Identifying potential risk factors for problem gambling (PG) is of primary importance for planning preventive and therapeutic interventions. We illustrate a new approach based on the combination of standard logistic regression and an innovative method of supervised data mining (Logic Learning Machine or LLM). Data were taken from a pilot cross-sectional study to identify subjects with PG behaviour, assessed by two internationally validated scales (SOGS and Lie/Bet). Information was obtained from 251 gamblers recruited in six betting establishments. Data on socio-demographic characteristics, lifestyle and cognitive-related factors, and type, place and frequency of preferred gambling were obtained by a self-administered questionnaire. The following variables associated with PG were identified: instant gratification games, alcohol abuse, cognitive distortion, illegal behaviours and having started gambling with a relative or a friend. Furthermore, the combination of LLM and LR indicated the presence of two different types of PG, namely: (a) daily gamblers, more prone to illegal behaviour, with poor money management skills and who started gambling at an early age, and (b) non-daily gamblers, characterised by superstitious beliefs and a higher preference for immediate reward games. Finally, instant gratification games were strongly associated with the number of games usually played. Studies on gamblers habitually frequenting betting shops are rare. The finding of different types of PG by habitual gamblers deserves further analysis in larger studies. Advanced data mining algorithms, like LLM, are powerful tools and potentially useful in identifying risk factors for PG.

  14. Predicting species cover of marine macrophyte and invertebrate species combining hyperspectral remote sensing, machine learning and regression techniques.

    Directory of Open Access Journals (Sweden)

    Jonne Kotta

    Full Text Available In order to understand biotic patterns and their changes in nature there is an obvious need for high-quality seamless measurements of such patterns. While remote sensing methods have been applied with reasonable success in terrestrial environments, their use in aquatic ecosystems has remained challenging. In the present study we combined hyperspectral remote sensing and boosted regression tree modelling (BRT), an ensemble method for statistical techniques and machine learning, in order to test their applicability in predicting macrophyte and invertebrate species cover in the optically complex seawater of the Baltic Sea. The BRT technique combined with remote sensing and traditional spatial modelling succeeded in identifying, constructing and testing the functionality of abiotic environmental predictors on the coverage of benthic macrophyte and invertebrate species. Our models easily predicted a large quantity of macrophyte and invertebrate species cover and recaptured a multitude of interactions between environment and biota, indicating a strong potential of the method for the modelling of aquatic species in a large variety of ecosystems.
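
    As a rough stand-in for the boosted regression tree (BRT) modelling described above, the sketch below fits scikit-learn's GradientBoostingRegressor to synthetic predictors and reports held-out R² and relative predictor importance; the predictor set (depth, wave exposure, reflectance bands) is hypothetical.

```python
# Sketch: a boosted regression tree model for species cover, on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins: depth, wave exposure and three reflectance bands.
X = rng.normal(size=(300, 5))
cover = 50 + 10 * np.tanh(X[:, 0]) - 5 * X[:, 1] + rng.normal(scale=2, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, cover, random_state=0)
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=3)
brt.fit(X_tr, y_tr)
print("held-out R^2:", round(brt.score(X_te, y_te), 2))
print("relative predictor importance:", np.round(brt.feature_importances_, 2))
```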

  15. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low vision rehabilitation and the basic molecular biology and genetics of colour vision. Takes a broad interdisciplinary approach combining basics in vision sciences with the most recent developments in the area. Includes an extensive list of technical terms and explanations to encourage student understanding. Successfully brings together the most important areas of the subject into one volume.

  16. Machine Learning Approach for Classifying Multiple Sclerosis Courses by Combining Clinical Data with Lesion Loads and Magnetic Resonance Metabolic Features.

    Science.gov (United States)

    Ion-Mărgineanu, Adrian; Kocevar, Gabriel; Stamile, Claudio; Sima, Diana M; Durand-Dubief, Françoise; Van Huffel, Sabine; Sappey-Marinier, Dominique

    2017-01-01

    Purpose: The purpose of this study is classifying multiple sclerosis (MS) patients in the four clinical forms as defined by the McDonald criteria using machine learning algorithms trained on clinical data combined with lesion loads and magnetic resonance metabolic features. Materials and Methods: Eighty-seven MS patients [12 Clinically Isolated Syndrome (CIS), 30 Relapse Remitting (RR), 17 Primary Progressive (PP), and 28 Secondary Progressive (SP)] and 18 healthy controls were included in this study. Longitudinal data available for each MS patient included clinical (e.g., age, disease duration, Expanded Disability Status Scale), conventional magnetic resonance imaging and spectroscopic imaging. We extract N-acetyl-aspartate (NAA), Choline (Cho), and Creatine (Cre) concentrations, and we compute three features for each spectroscopic grid by averaging metabolite ratios (NAA/Cho, NAA/Cre, Cho/Cre) over good quality voxels. We built linear mixed-effects models to test for statistically significant differences between MS forms. We test nine binary classification tasks on clinical data, lesion loads, and metabolic features, using a leave-one-patient-out cross-validation method based on 100 random patient-based bootstrap selections. We compute F1-scores and BAR values after tuning Linear Discriminant Analysis (LDA), Support Vector Machines with gaussian kernel (SVM-rbf), and Random Forests. Results: Statistically significant differences were found between the disease starting points of each MS form using four different response variables: Lesion Load, NAA/Cre, NAA/Cho, and Cho/Cre ratios. Training SVM-rbf on clinical and lesion loads yields F1-scores of 71-72% for CIS vs. RR and CIS vs. RR+SP, respectively. For RR vs. PP we obtained good classification results (maximum F1-score of 85%) after training LDA on clinical and metabolic features, while for RR vs. SP we obtained slightly higher classification results (maximum F1-score of 87%) after training LDA and SVM-rbf on
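
    The leave-one-patient-out evaluation over longitudinal scans can be expressed with scikit-learn's LeaveOneGroupOut, as in the hedged sketch below; the features, labels and the 50-patient/4-time-point layout are synthetic placeholders, not the study's data.

```python
# Sketch: leave-one-patient-out cross-validation, where every scan of a patient
# shares a group id so all of that patient's time points land in the same fold.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # clinical + lesion-load + metabolic features
y = rng.integers(0, 2, size=200)           # e.g. RR vs PP (assumed binary task)
patient_id = np.repeat(np.arange(50), 4)   # 50 patients x 4 time points (hypothetical)

logo = LeaveOneGroupOut()
pred = cross_val_predict(SVC(kernel="rbf", gamma="scale"), X, y,
                         cv=logo, groups=patient_id)
print("F1-score:", round(f1_score(y, pred), 3))
```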

  17. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Full Text Available Computer vision has been studied from many perspectives. It extends from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of recent technologies and theoretical concepts explaining the development of computer vision, especially as related to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand events or descriptions, and recognize scenic patterns. It applies methods across a wide range of application domains involving massive data analysis. This paper reviews recent developments related to computer vision, image processing, and their related studies. We categorize the computer vision mainstream into groups, e.g., image processing, object recognition, and machine learning. We also provide a brief explanation of up-to-date information about the techniques and their performance.

  18. Vision Screening

    Science.gov (United States)


  19. Combination of mass spectrometry-based targeted lipidomics and supervised machine learning algorithms in detecting adulterated admixtures of white rice.

    Science.gov (United States)

    Lim, Dong Kyu; Long, Nguyen Phuoc; Mo, Changyeun; Dong, Ziyuan; Cui, Lingmei; Kim, Giyoung; Kwon, Sung Won

    2017-10-01

    The mixing of extraneous ingredients with original products is a common adulteration practice in food and herbal medicines. In particular, authenticity of white rice and its corresponding blended products has become a key issue in the food industry. Accordingly, our current study aimed to develop and evaluate a novel discrimination method by combining targeted lipidomics with powerful supervised learning methods, and eventually introduce a platform to verify the authenticity of white rice. A total of 30 cultivars were collected, and 330 representative samples of white rice from Korea and China as well as seven mixing ratios were examined. Random forests (RF), support vector machines (SVM) with a radial basis function kernel, C5.0, model averaged neural network, and k-nearest neighbor classifiers were used for the classification. We achieved the desired results, and the classifiers effectively differentiated white rice from Korea from blended samples with high prediction accuracy for contamination ratios as low as five percent. In addition, RF and SVM classifiers were generally superior to and more robust than the other techniques. Our approach demonstrated that the relative differences in lysoGPLs can be successfully utilized to detect the adulterated mixing of white rice originating from different countries. In conclusion, the present study introduces a novel and high-throughput platform that can be applied to authenticate adulterated admixtures from original white rice samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Modeling the adsorption of PAH mixture in silica nanopores by molecular dynamic simulation combined with machine learning.

    Science.gov (United States)

    Sui, Hong; Li, Lin; Zhu, Xinzhe; Chen, Daoyi; Wu, Guozhong

    2016-02-01

    The persistence of polycyclic aromatic hydrocarbons (PAHs) in contaminated soils is largely controlled by their molecular fate in soil pores. The adsorption and diffusion of a 16-PAH mixture in silica nanopores with diameters of 2.0, 2.5, 3.0 and 3.5 nm were characterized by adsorption energy, mean square displacement, free surface area and free volume fraction using molecular dynamic (MD) simulation. Results suggested that PAH adsorption in silica nanopores was associated with the diffusion process, while competitive sorption was not the dominant mechanism in the context of this study. The partial least squares (PLS) regression and machine learning (ML) methods (i.e. support vector regression, M5 decision tree and multilayer perceptrons) were used to correlate the adsorption energy with the pore diameter and PAH properties (number of carbon atoms, aromatic ring number, boiling point, molecular weight, octanol-water partition coefficient, octanol-organic carbon partition coefficient, solvent accessible area, solvent accessible volume and polarization). Results indicated that the PAH adsorption could not be predicted by linear regression, as the R²Y and Q²Y coefficients of the PLS analysis were 0.375 and 0.199, respectively. The nonlinearity was well recognized by ML with correlation coefficients up to 0.9. Overall, the combination of MD simulation and ML approaches can assist in interpreting the sequestration of organic contaminants in soil nanopores. Copyright © 2015 Elsevier Ltd. All rights reserved.
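
    The linear-versus-nonlinear contrast reported above (PLS failing, ML succeeding) can be reproduced in miniature with scikit-learn, as sketched below on a synthetic, deliberately nonlinear target; the descriptor matrix is a placeholder for the MD-derived properties.

```python
# Sketch: contrasting linear PLS regression with a nonlinear SVR on a toy
# adsorption-energy-like target. Data are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))      # pore diameter + 9 PAH properties (hypothetical)
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=64)  # nonlinear toy target

for name, model in [("PLS", PLSRegression(n_components=3)),
                    ("SVR (RBF)", SVR(kernel="rbf", C=10.0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```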

  1. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support vector machines for Classifying the Species of Sorghum Seeds

    Science.gov (United States)

    Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X. L.

    2016-01-01

    This study was carried out for rapid and noninvasive determination of the class of sorghum species by using the manifold dimensionality reduction (MDR) method and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. The Durbin and run test methods of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), local linear embedding, laplacian eigenmaps and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were utilized as the input of LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were respectively used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, where the MAP scores and prediction accuracy were 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify the species of sorghum with reasonable accuracy.
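
    A minimal sketch of the MDR-plus-classifier pipeline, assuming scikit-learn: Isomap supplies the manifold reduction and, since scikit-learn has no LS-SVM, a standard RBF-kernel SVC is substituted; the spectra and species labels are synthetic placeholders.

```python
# Sketch: manifold dimensionality reduction followed by a kernel classifier,
# as a stand-in for the ISOMAP + LS-SVM pipeline described above.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 400))      # hypothetical MIR spectra (120 seeds)
species = rng.integers(0, 4, size=120)     # hypothetical sorghum species labels

model = make_pipeline(StandardScaler(),
                      Isomap(n_neighbors=10, n_components=8),
                      SVC(kernel="rbf", gamma="scale"))
print("accuracy:", cross_val_score(model, spectra, species, cv=5).mean())
```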

  2. hERG classification model based on a combination of support vector machine method and GRIND descriptors

    DEFF Research Database (Denmark)

    Li, Qiyuan; Jorgensen, Flemming Steen; Oprea, Tudor

    2008-01-01

    invest substantial effort in the assessment of cardiac toxicity of drugs. The development of in silico tools to filter out potential hERG channel inhibitors in early stages of the drug discovery process is of considerable interest. Here, we describe binary classification models based on a large... and diverse library of 495 compounds. The models combine pharmacophore-based GRIND descriptors with a support vector machine (SVM) classifier in order to discriminate between hERG blockers and nonblockers. Our models were applied at different thresholds from 1 to 40 μM and achieved an overall accuracy up... to 94% with a Matthews correlation coefficient (MCC) of 0.86 (F-measure of 0.90 for blockers and 0.95 for nonblockers). The model at a 40 μM threshold showed the best performance and was validated internally (MCC of 0.40 and F-measure of 0.57 for blockers and 0.81 for nonblockers, using a leave...

  3. Vision Guided Intelligent Robot Design And Experiments

    Science.gov (United States)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-system approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  4. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  5. Predictive Toxicology: Modeling Chemical Induced Toxicological Response Combining Circular Fingerprints with Random Forest and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Alexios eKoutsoukas

    2016-03-01

    Full Text Available Modern drug discovery and toxicological research are under pressure, as the cost of developing and testing new chemicals for potential toxicological risk is rising. Extensive evaluation of chemical products for potential adverse effects is a challenging task, due to the large number of chemicals and the possible hazardous effects on human health. Safety regulatory agencies around the world are dealing with two major challenges: first, the growth of chemicals introduced every year in household products and medicines that need to be tested, and second, the need to protect public welfare. Hence, alternative and more efficient toxicological risk assessment methods are in high demand. The Toxicology in the 21st Century (Tox21) consortium, a collaborative effort, was formed to develop and investigate alternative assessment methods. A collection of 10,000 compounds composed of environmental chemicals and approved drugs was screened for interference in biochemical pathways and released for crowdsourcing data analysis. The physicochemical space covered by the Tox21 library was explored, measured by Molecular Weight (MW) and the octanol/water partition coefficient (cLogP). It was found that on average chemical structures had a MW of 272.6 Daltons. In the case of cLogP, the average value was 2.476. Next, relationships between assays were examined based on compounds' activity profiles across the assays utilizing the Pearson correlation coefficient r. A cluster was observed between the Androgen and Estrogen Receptors and their ligand binding domains accordingly, indicating the presence of cross-talk among the receptors. The highest correlations observed were between NR.AR and NR.AR_LBD, where r=0.66, and between NR.ER and NR.ER_LBD, where r=0.5. Our approach to model the Tox21 data consisted of utilizing circular molecular fingerprints combined with Random Forest and Support Vector Machine by modeling each assay independently. In all of the 12 sub-challenges our modeling
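
    The modelling strategy above (circular fingerprints plus a per-assay ensemble classifier) can be sketched with RDKit and scikit-learn as below; the SMILES strings and activity labels are placeholders, not Tox21 records.

```python
# Sketch: circular (Morgan/ECFP-like) fingerprints fed to a random forest,
# trained independently for one assay. Inputs are illustrative placeholders.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
labels = np.array([0, 1, 0, 1])            # hypothetical activity for one assay

def circular_fp(smi, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)   # copy bit vector into a numpy array
    return arr

X = np.vstack([circular_fp(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, labels)
print(clf.predict_proba(X)[:, 1])          # in-sample scores, for illustration only
```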

  6. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV) which limits their usefulness for certain applications. While panorama vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to a 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, which is capable of producing 3D coordinate information for the whole global observation space and simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single piece of vision equipment and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  7. NWT-02, a fixed combination of lutein, zeaxanthin and docosahexaenoic acid in egg yolk and reduction of the loss of vision: evaluation of a health claim pursuant to Article 13(5) of Regulation (EC) No 1924/2006

    DEFF Research Database (Denmark)

    Sjödin, Anders Mikael

    2018-01-01

    (DHA) (≥ 170 mg). The Panel considers that the food/constituent that is the subject of the health claim, NWT-02, a fixed combination of lutein, zeaxanthin and docosahexaenoic acid in egg yolk, is sufficiently characterised. The claimed effect proposed by the applicant is ‘reduces loss of vision...... and docosahexaenoic acid in egg yolk, and a reduction of the loss of vision....

  8. The combination of a histogram-based clustering algorithm and support vector machine for the diagnosis of osteoporosis

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Min Suk; Kavitha, Muthu Subash [Dept. of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul (Korea, Republic of); Asano, Akira [Graduate School of Engineering, Hiroshima University, Hiroshima (Japan); Taguchi, Akira [Dept. of Oral and Maxillofacial Radiology, Matsumoto Dental University, Nagano (Japan)

    2013-09-15

    To prevent low bone mineral density (BMD), that is, osteoporosis, in postmenopausal women, it is essential to diagnose osteoporosis more precisely. This study presented an automatic approach utilizing a histogram-based automatic clustering (HAC) algorithm with a support vector machine (SVM) to analyse dental panoramic radiographs (DPRs) and thus improve diagnostic accuracy by identifying postmenopausal women with low BMD or osteoporosis. We integrated our newly-proposed histogram-based automatic clustering (HAC) algorithm with our previously-designed computer-aided diagnosis system. The extracted moment-based features (mean, variance, skewness, and kurtosis) of the mandibular cortical width for the radial basis function (RBF) SVM classifier were employed. We also compared the diagnostic efficacy of the SVM model with the back propagation (BP) neural network model. In this study, DPRs and BMD measurements of 100 postmenopausal women patients (aged >50 years), with no previous record of osteoporosis, were randomly selected for inclusion. The accuracy, sensitivity, and specificity of the BMD measurements using our HAC-SVM model to identify women with low BMD were 93.0% (88.0%-98.0%), 95.8% (91.9%-99.7%) and 86.6% (79.9%-93.3%), respectively, at the lumbar spine; and 89.0% (82.9%-95.1%), 96.0% (92.2%-99.8%) and 84.0% (76.8%-91.2%), respectively, at the femoral neck. Our experimental results predict that the proposed HAC-SVM model combination applied on DPRs could be useful to assist dentists in early diagnosis and help to reduce the morbidity and mortality associated with low BMD and osteoporosis.
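
    The moment-based feature step can be illustrated as below: mean, variance, skewness and kurtosis of a (synthetic) cortical-width profile are fed to an RBF-kernel SVM. The profiles, labels and preprocessing are assumptions for illustration, not the HAC-SVM system itself.

```python
# Sketch: four moment-based features of a cortical-width profile, classified
# with an RBF-kernel SVM. Data are synthetic placeholders.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
profiles = rng.normal(loc=3.0, scale=0.5, size=(100, 64))  # mandibular widths (mm)
low_bmd = rng.integers(0, 2, size=100)                     # assumed label coding

def moment_features(width_profile):
    return [np.mean(width_profile), np.var(width_profile),
            skew(width_profile), kurtosis(width_profile)]

X = np.array([moment_features(p) for p in profiles])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, low_bmd)
print("training accuracy:", clf.score(X, low_bmd))
```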

  9. A real-time brain-machine interface combining motor target and trajectory intent using an optimal feedback control design.

    Science.gov (United States)

    Shanechi, Maryam M; Williams, Ziv M; Wornell, Gregory W; Hu, Rollin C; Powers, Marissa; Brown, Emery N

    2013-01-01

    Real-time brain-machine interfaces (BMI) have focused on either estimating the continuous movement trajectory or target intent. However, natural movement often incorporates both. Additionally, BMIs can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system.

  10. A real-time brain-machine interface combining motor target and trajectory intent using an optimal feedback control design.

    Directory of Open Access Journals (Sweden)

    Maryam M Shanechi

    Full Text Available Real-time brain-machine interfaces (BMI have focused on either estimating the continuous movement trajectory or target intent. However, natural movement often incorporates both. Additionally, BMIs can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system.

  11. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    Science.gov (United States)

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the toolbox FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST) developed in Oxford Centre for functional MRI of the brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and test. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probability. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from the simulated magnetic resonance imaging (MRI) using Brainweb MRI simulator and real data provided by Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to corresponding ground truth. PMID:24696800
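
    The sigmoid mapping of SVM outputs to probabilities mentioned above is essentially Platt-style scaling; a minimal sketch with scikit-learn's CalibratedClassifierCV follows, on synthetic two-feature data standing in for voxel intensity and spatial position.

```python
# Sketch: mapping raw SVM decision values to posterior probabilities with a
# sigmoid (Platt-style scaling). Data and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))                   # e.g. voxel intensity + position
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical GM/WM labels

svm = LinearSVC()                               # raw SVM gives only decision values
calibrated = CalibratedClassifierCV(svm, method="sigmoid", cv=5).fit(X, y)
print(calibrated.predict_proba(X[:3]))          # per-voxel posterior probabilities
```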

  12. Predicting Species Cover of Marine Macrophyte and Invertebrate Species Combining Hyperspectral Remote Sensing, Machine Learning and Regression Techniques

    National Research Council Canada - National Science Library

    Kotta, Jonne; Kutser, Tiit; Teeveer, Karolin; Vahtmäe, Ele; Pärnoja, Merli

    2013-01-01

    ...), an ensemble method for statistical techniques and machine learning, in order to test their applicability in predicting macrophyte and invertebrate species cover in the optically complex seawater of the Baltic Sea...

  13. Predicting species cover of marine macrophyte and invertebrate species combining hyperspectral remote sensing, machine learning and regression techniques

    National Research Council Canada - National Science Library

    Kotta, Jonne; Kutser, Tiit; Teeveer, Karolin; Vahtmäe, Ele; Pärnoja, Merli

    2014-01-01

    ...), an ensemble method for statistical techniques and machine learning, in order to test their applicability in predicting macrophyte and invertebrate species cover in the optically complex seawater of the Baltic Sea...

  14. Neuro-vector-based electrical machine driver combining a neural plant identifier and a conventional vector controller

    Science.gov (United States)

    Madani, Kurosh; Mercier, Gilles; Dinarvand, Mohammad; Depecker, Jean-Charles

    1999-03-01

    One of the most important problems for a machine control process is system identification. To identify varying parameters which are dependent on other system parameters (speed, voltage and currents, etc.), one must have an adaptive control system. Conventional vector control of synchronous machines implemented with PID controllers has recently been proposed and presents the best current solution. It assumes an appropriate model of the plant. But real plant parameters vary, and the PID controller is not suitable because of this parameter variation and the non-linearity introduced by the machine's physical structure. In this paper, we present an on-line dynamic adaptive neural-based vector control system that identifies the motor parameters of a synchronous machine. We present and discuss a DSP-based real-time implementation of our adaptive neuro-controller. Simulation and experimental results validating our approach are reported.

  15. What Do We Really Need? Visions of an Ideal Human-Machine Interface for NOTES Mechatronic Support Systems From the View of Surgeons, Gastroenterologists, and Medical Engineers.

    Science.gov (United States)

    Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Wilhelm, Dirk; Reiser, Silvano; Meining, Alexander; Feussner, Hubertus

    2015-08-01

    To investigate why natural orifice translumenal endoscopic surgery (NOTES) has not yet become widely accepted and to prove whether the main reason is still the lack of appropriate platforms due to the deficiency of applicable interfaces. To assess expectations of a suitable interface design, we performed a survey on human-machine interfaces for NOTES mechatronic support systems among surgeons, gastroenterologists, and medical engineers. Of 120 distributed questionnaires, each consisting of 14 distinct questions, 100 (83%) were eligible for analysis. A mechatronic platform for NOTES was considered "important" by 71% of surgeons, 83% of gastroenterologists, and 56% of medical engineers. "Intuitivity" and "simple to use" were the most favored aspects (33% to 51%). Haptic feedback was considered "important" by 70% of participants. In all, 53% of surgeons, 50% of gastroenterologists, and 33% of medical engineers already had experience with NOTES platforms or other surgical robots; however, current interfaces only met expectations in just more than 50%. Whereas surgeons did not favor a certain working posture, gastroenterologists and medical engineers preferred a sitting position. Three-dimensional visualization was generally considered "nice to have" (67% to 72%); however, for 26% of surgeons, 17% of gastroenterologists, and 7% of medical engineers it did not matter (P = 0.018). Requests and expectations of human-machine interfaces for NOTES seem to be generally similar for surgeons, gastroenterologists, and medical engineers. Consensus exists on the importance of developing interfaces that should be both intuitive and simple to use, are similar to preexisting familiar instruments, and exceed currently available systems. © The Author(s) 2014.

  16. Research on Three-dimensional Motion History Image Model and Extreme Learning Machine for Human Body Movement Trajectory Recognition

    Directory of Open Access Journals (Sweden)

    Zheng Chang

    2015-01-01

    Full Text Available Based on traditional machine vision recognition technology and traditional artificial neural networks for body movement trajectories, this paper identifies the shortcomings of the traditional recognition technology. By combining the invariant moments of the three-dimensional motion history image (computed as the eigenvector of body movements) and the extreme learning machine (constructed as the classification artificial neural network of body movements), the paper applies the method to the machine vision of the body movement trajectory. In detail, the paper gives a detailed introduction to the algorithm and realization scheme of body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine. Finally, by comparing the results of the recognition experiments, it attempts to verify that the method of body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine has a more accurate recognition rate and better robustness.
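
    A bare-bones extreme learning machine of the kind used above is easy to write directly: a random, untrained hidden layer followed by a least-squares output layer. The sketch below uses synthetic stand-ins for the invariant-moment feature vectors and movement classes.

```python
# Sketch: a minimal extreme learning machine (random hidden layer + least-squares
# output weights). Features and labels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))                 # hypothetical invariant-moment features
y = rng.integers(0, 5, size=200)              # hypothetical movement classes
Y = np.eye(5)[y]                              # one-hot targets

n_hidden = 100
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
b = rng.normal(size=n_hidden)                 # random biases
H = np.tanh(X @ W + b)                        # hidden-layer activations

beta = np.linalg.pinv(H) @ Y                  # output weights via Moore-Penrose pseudo-inverse
pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", np.mean(pred == y))
```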

  17. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    Science.gov (United States)

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes, Single word and Bi-gram models, Support Vector Machine and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event leading to injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW = NB-Bi-gram = SVM had very high performance (0.93 overall sensitivity/positive predictive value and high accuracy (i.e. high sensitivity and positive predictive values)) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as we
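
    The "filter on prediction strength" idea can be sketched as below: a simple bag-of-words classifier auto-codes the narratives it is confident about and routes the weakest 30% to human review. The narratives, event codes and classifier choice are toy assumptions, not the study's data or tuned models.

```python
# Sketch: human-machine pairing by prediction strength — auto-code confident
# narratives, send the least confident fraction for manual review.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_text = ["slipped on wet floor", "cut finger on blade", "fell from ladder"]
train_code = ["fall_same_level", "contact_object", "fall_to_lower_level"]
new_text = ["worker fell off scaffolding", "hand caught in press", "strained back"]

vec = CountVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(train_text), train_code)

proba = clf.predict_proba(vec.transform(new_text))
strength = proba.max(axis=1)                 # prediction strength per narrative
cutoff = np.quantile(strength, 0.30)         # weakest 30% -> manual review
for text, s, code in zip(new_text, strength, clf.classes_[proba.argmax(axis=1)]):
    route = "manual review" if s <= cutoff else f"auto-code: {code}"
    print(f"{s:.2f}  {route:30s} {text}")
```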

  18. Research on Three-dimensional Motion History Image Model and Extreme Learning Machine for Human Body Movement Trajectory Recognition

    OpenAIRE

    Zheng Chang; Xiaojuan Ban; Qing Shen; Jing Guo

    2015-01-01

    Based on the traditional machine vision recognition technology and traditional artificial neural networks about body movement trajectory, this paper finds out the shortcomings of the traditional recognition technology. By combining the invariant moments of the three-dimensional motion history image (computed as the eigenvector of body movements) and the extreme learning machine (constructed as the classification artificial neural network of body movements), the paper applies the method to the...

  19. Development of a 3D Parallel Mechanism Robot Arm with Three Vertical-Axial Pneumatic Actuators Combined with a Stereo Vision System

    Directory of Open Access Journals (Sweden)

    Hao-Ting Lin

    2011-12-01

    Full Text Available This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot’s end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end

  20. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
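
    The Denavit-Hartenberg analysis mentioned in both records above rests on a standard per-link homogeneous transform; the sketch below builds it in NumPy and chains a made-up three-joint parameter table, which is not the actual geometry of the pneumatic parallel robot.

```python
# Sketch: the standard Denavit-Hartenberg link transform and a chained forward
# kinematics example. The D-H table below is hypothetical.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link from standard D-H parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Hypothetical D-H table: (theta, d, a, alpha) per joint.
dh_table = [(np.pi / 6, 0.10, 0.20, 0.0),
            (np.pi / 4, 0.00, 0.15, np.pi / 2),
            (0.0,       0.05, 0.10, 0.0)]

T = np.eye(4)
for params in dh_table:
    T = T @ dh_transform(*params)        # chain the link transforms
print("end-effector position:", T[:3, 3])
```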

  1. Artificial Vision, New Visual Modalities and Neuroadaptation

    OpenAIRE

    Hilmi Or

    2012-01-01

    To study the descriptions from which artificial vision derives, to explore the new visual modalities resulting from eye surgeries and diseases, and to gain awareness of the use of machine vision systems for both enhancement of visual perception and better understanding of neuroadaptation. Science has not yet been able to define what vision is. However, some optical-based systems and definitions have been established considering some of the factors involved in the formation of seeing. The best known...

  2. DNABind: a hybrid algorithm for structure-based prediction of DNA-binding residues by combining machine learning- and template-based approaches.

    Science.gov (United States)

    Liu, Rong; Hu, Jianjun

    2013-11-01

    Accurate prediction of DNA-binding residues has become a problem of increasing importance in structural bioinformatics. Here, we presented DNABind, a novel hybrid algorithm for identifying these crucial residues by exploiting the complementarity between machine learning- and template-based methods. Our machine learning-based method was based on the probabilistic combination of a structure-based and a sequence-based predictor, both of which were implemented using support vector machine algorithms. The former included our well-designed structural features, such as solvent accessibility, local geometry, topological features, and relative positions, which can effectively quantify the difference between DNA-binding and nonbinding residues. The latter combined evolutionary conservation features with three other sequence attributes. Our template-based method depended on structural alignment and utilized the template structure from known protein-DNA complexes to infer DNA-binding residues. We showed that the template method had excellent performance when reliable templates were found for the query proteins but tended to be strongly influenced by the template quality as well as the conformational changes upon DNA binding. In contrast, the machine learning approach yielded better performance when high-quality templates were not available (about 1/3 of the cases in our dataset) or the query protein was subject to intensive transformation changes upon DNA binding. Our extensive experiments indicated that the hybrid approach can distinctly improve the performance of the individual methods for both bound and unbound structures. DNABind also significantly outperformed the state-of-the-art algorithms by around 10% in terms of Matthews correlation coefficient. The proposed methodology could also have wide application in various protein functional site annotations. DNABind is freely available at http://mleg.cse.sc.edu/DNABind/. Copyright © 2013 Wiley Periodicals, Inc.

  3. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  4. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision ... and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the..."...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  5. The vision trap.

    Science.gov (United States)

    Langeler, G H

    1992-01-01

    At Mentor Graphics Corporation, Gerry Langeler was the executive responsible for vision, and vision, he discovered, has the power to weaken a strong company. Mentor helped to invent design-automation electronics in the early 1980s, and by the end of the decade, it dominated the industry. In its early days, fighting to survive, Mentor's motto was Build Something People Will Buy. Then when clear competition emerged in the form of Daisy Systems, a startup that initially outsold Mentor, the watchword became Beat Daisy. Both "visions" were pragmatic and immediate. They gave Mentor a sense of purpose as it developed its products and gathered momentum. Once Daisy was beaten, however, company vision began to self-inflate. As Mentor grew more and more successful, Langeler formulated vision statements that were more and more ambitious, grand, and inspirational. The company traded its gritty determination to survive for a dream of future glory. The once explicit call for effective action became a fervid cry for abstract perfection. The first step was Six Boxes, a transitional vision that combined goals for success in six business areas with grandiose plans to compete with IBM at the level of billion-dollar revenues. From there, vision stepped up to the 10X Imperative, a quality-improvement program that focused on arbitrary goals and measures that were, in fact, beyond the company's control. The last escalation came when Mentor Graphics decided to Change the Way the World Designs. The company had stopped making product and was making poetry. Finally, in 1991, after six years of increasing self-infatuation, Mentor hit a wall of decreasing indicators. Langeler, who had long since begun to doubt the value of abstract visions, reinstated Build Something People Will Buy. And Mentor was back to basics, a sense of purpose back to its workplace.

  6. All Vision Impairment

    Science.gov (United States)

    Vision Impairment Defined: Vision impairment is defined as the best- ... 2010 U.S. age-specific prevalence rates for vision impairment by age and race/ethnicity (table for 2010) ...

  7. Synthesis of a pH-Sensitive Hetero[4]Rotaxane Molecular Machine that Combines [c2]Daisy and [2]Rotaxane Arrangements.

    Science.gov (United States)

    Waelès, Philip; Riss-Yaw, Benjamin; Coutrot, Frédéric

    2016-05-10

    The synthesis of a novel pH-sensitive hetero[4]rotaxane molecular machine through a self-sorting strategy is reported. The original tetra-interlocked molecular architecture combines a [c2]daisy chain scaffold linked to two [2]rotaxane units. Actuation of the system through pH variation is possible thanks to the specific interactions of the dibenzo-24-crown-8 (DB24C8) macrocycles for ammonium, anilinium, and triazolium molecular stations. Selective deprotonation of the anilinium moieties triggers shuttling of the unsubstituted DB24C8 along the [2]rotaxane units. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. A Combination of Machine Learning and Cerebellar Models for the Motor Control and Learning of a Modular Robot

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Pacheco, Moises

    2017-01-01

    We scaled up a bio-inspired control architecture for the motor control and motor learning of a real modular robot. In our approach, the Locally Weighted Projection Regression algorithm (LWPR) and a cerebellar microcircuit coexist, forming a Unit Learning Machine. The LWPR optimizes the input space...... and learns the internal model of a single robot module to command the robot to follow a desired trajectory with its end-effector. The cerebellar microcircuit refines the LWPR output delivering corrective commands. We contrasted distinct cerebellar circuits including analytical models and spiking models...

  9. Quality Control by Artificial Vision

    Energy Technology Data Exchange (ETDEWEB)

    Lam, Edmond Y. [University of Hong Kong, The; Gleason, Shaun Scott [ORNL; Niel, Kurt S. [Upper Austria University of Applied Science, Engineering and Environmental Studies

    2010-01-01

    Computational technology has fundamentally changed many aspects of our lives. One clear piece of evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements that foster quality control by artificial vision, and to technology fine-tuned for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with the wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier

  10. Fullerene Machines

    Science.gov (United States)

    Globus, Al; Saini, Subhash (Technical Monitor)

    1998-01-01

    Fullerenes possess remarkable properties, and many investigators have examined the mechanical, electronic and other characteristics of carbon sp2 systems in some detail. In addition, C-60 can be functionalized with many classes of molecular fragments, and we may expect the caps of carbon nanotubes to have a similar chemistry. Finally, carbon nanotubes have been attached to the end of scanning probe microscope (SPM) tips. SPMs can be manipulated with sub-angstrom accuracy. Together, these investigations suggest that complex molecular machines made of fullerenes may someday be created and manipulated with very high accuracy. We have studied some such systems computationally (primarily functionalized carbon nanotube gears and computer components). If such machines can be combined appropriately, a class of materials may be created that can sense their environment, calculate a response, and act. The implications of such hypothetical materials are substantial.

  11. Machine-learning-based diagnosis of schizophrenia using combined sensor-level and source-level EEG features.

    Science.gov (United States)

    Shim, Miseon; Hwang, Han-Jeong; Kim, Do-Won; Lee, Seung-Hwan; Im, Chang-Hwan

    2016-10-01

    Recently, an increasing number of researchers have endeavored to develop practical tools for diagnosing patients with schizophrenia using machine learning techniques applied to EEG biomarkers. Although a number of studies showed that source-level EEG features can potentially be applied to the differential diagnosis of schizophrenia, most studies have used only sensor-level EEG features such as ERP peak amplitude and power spectrum for machine learning-based diagnosis of schizophrenia. In this study, we used both sensor-level and source-level features extracted from EEG signals recorded during an auditory oddball task for the classification of patients with schizophrenia and healthy controls. EEG signals were recorded from 34 patients with schizophrenia and 34 healthy controls while each subject was asked to attend to oddball tones. Our results demonstrated higher classification accuracy when source-level features were used together with sensor-level features, compared to when only sensor-level features were used. In addition, the selected sensor-level features were mostly found in the frontal area, and the selected source-level features were mostly extracted from the temporal area, which coincide well with the well-known pathological region of cognitive processing in patients with schizophrenia. Our results suggest that our approach would be a promising tool for the computer-aided diagnosis of schizophrenia. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Machine Learning Using Combined Structural and Chemical Descriptors for Prediction of Methane Adsorption Performance of Metal Organic Frameworks (MOFs).

    Science.gov (United States)

    Pardakhti, Maryam; Moharreri, Ehsan; Wanik, David; Suib, Steven L; Srivastava, Ranjan

    2017-10-09

    Using molecular simulation for adsorbent screening is computationally expensive and thus prohibitive to materials discovery. Machine learning (ML) algorithms trained on fundamental material properties can potentially provide quick and accurate methods for screening purposes. Prior efforts have focused on structural descriptors for use with ML. In this work, the use of chemical descriptors, in addition to structural descriptors, was introduced for adsorption analysis. Evaluation of structural and chemical descriptors coupled with various ML algorithms, including decision tree, Poisson regression, support vector machine and random forest, was carried out to predict methane uptake on hypothetical metal organic frameworks. To highlight their predictive capabilities, ML models were trained on 8% of a data set consisting of 130,398 MOFs and then tested on the remaining 92% to predict methane adsorption capacities. When structural and chemical descriptors were jointly used as ML input, the random forest model with 10-fold cross validation proved to be superior to the other ML approaches, with an R2 of 0.98 and a mean absolute percent error of about 7%. The training and prediction using the random forest algorithm for adsorption capacity estimation of all 130,398 MOFs took approximately 2 h on a single personal computer, several orders of magnitude faster than actual molecular simulations on high-performance computing clusters.
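
    The screening workflow above (train a random forest on 8% of the descriptor table, predict the remaining 92%) is easy to sketch; the synthetic descriptor matrix and uptake values below merely stand in for the real hypothetical-MOF database.

        # Hedged sketch of the 8%/92% random-forest screening setup (synthetic data).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score, mean_absolute_percentage_error

        rng = np.random.default_rng(1)
        X = rng.normal(size=(5000, 12))                        # structural + chemical descriptors
        y = 100 + X @ rng.normal(size=12) + rng.normal(scale=1.0, size=5000)  # stand-in uptake values

        X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.08, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

        y_pred = model.predict(X_test)
        print("R2:", r2_score(y_test, y_pred))
        print("MAPE:", mean_absolute_percentage_error(y_test, y_pred))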

  13. Monitoring mangrove biomass change in Vietnam using SPOT images and an object-based approach combined with machine learning algorithms

    Science.gov (United States)

    Pham, Lien T. H.; Brabyn, Lars

    2017-06-01

    Mangrove forests are well-known for their provision of ecosystem services and capacity to reduce carbon dioxide concentrations in the atmosphere. Mapping and quantifying mangrove biomass is useful for the effective management of these forests and maximizing their ecosystem service performance. The objectives of this research were to model, map, and analyse the biomass change between 2000 and 2011 of mangrove forests in the Cangio region in Vietnam. SPOT 4 and 5 images were used in conjunction with object-based image analysis and machine learning algorithms. The study area included natural and planted mangroves of diverse species. After image preparation, three different mangrove associations were identified using two levels of image segmentation followed by a Support Vector Machine classifier and a range of spectral, texture and GIS information for classification. The overall classification accuracy for the 2000 and 2011 images were 77.1% and 82.9%, respectively. Random Forest regression algorithms were then used for modelling and mapping biomass. The model that integrated spectral, vegetation association type, texture, and vegetation indices obtained the highest accuracy (R2adj = 0.73). Among the different variables, vegetation association type was the most important variable identified by the Random Forest model. Based on the biomass maps generated from the Random Forest, total biomass in the Cangio mangrove forest increased by 820,136 tons over this period, although this change varied between the three different mangrove associations.

  14. Telescopic vision contact lens

    Science.gov (United States)

    Tremblay, Eric J.; Beer, R. Dirk; Arianpour, Ashkan; Ford, Joseph E.

    2011-03-01

    We present the concept, optical design, and first proof of principle experimental results for a telescopic contact lens intended to become a visual aid for age-related macular degeneration (AMD), providing magnification to the user without surgery or external head-mounted optics. Our contact lens optical system can provide a combination of telescopic and non-magnified vision through two independent optical paths through the contact lens. The magnified optical path incorporates a telescopic arrangement of positive and negative annular concentric reflectors to achieve 2.8x - 3x magnification on the eye, while light passing through a central clear aperture provides unmagnified vision.

  15. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description...... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous...... possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  16. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the needs of efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the following iterative method. After that, a bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
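
    The gray-world style initialisation and the adjustable correction step described above can be sketched as follows. The gain formula and the step value are generic textbook choices used for illustration, not the exact parameters of the proposed algorithm.

        # Hedged sketch: gray-world gains plus a controllable correction step
        # (generic illustration, not the paper's exact pipeline).
        import numpy as np

        def gray_world_gains(rgb):
            """Initial per-channel gains from the mean R, G and B statistics."""
            means = rgb.reshape(-1, 3).mean(axis=0)
            return means.mean() / means          # gain_c = mean(all channels) / mean(channel c)

        def apply_white_balance(rgb, gains, step=0.5):
            """Move the gains a fraction `step` of the way toward the estimate."""
            adjusted = 1.0 + step * (gains - 1.0)
            return np.clip(rgb * adjusted, 0.0, 1.0)

        frame = np.random.default_rng(2).uniform(size=(480, 640, 3))  # placeholder demosaiced frame
        balanced = apply_white_balance(frame, gray_world_gains(frame))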

  17. Stable Isotope Ratio and Elemental Profile Combined with Support Vector Machine for Provenance Discrimination of Oolong Tea (Wuyi-Rock Tea).

    Science.gov (United States)

    Lou, Yun-Xiao; Fu, Xian-Shu; Yu, Xiao-Ping; Ye, Zi-Hong; Cui, Hai-Feng; Zhang, Ya-Fen

    2017-01-01

    This paper focused on an effective method to discriminate the geographical origin of Wuyi-Rock tea by stable isotope ratio (SIR) and metallic element profiling (MEP) combined with support vector machine (SVM) analysis. Wuyi-Rock tea (n = 99) collected from nine producing areas and non-Wuyi-Rock tea (n = 33) from eleven nonproducing areas were analysed for SIR and MEP by established methods. The SVM model based on the coupled data produced the best prediction accuracy (0.9773). This shows that instrumental methods combined with a classification model can provide an effective and stable tool for provenance discrimination. Moreover, every feature variable in the stable isotope and metallic element data was ranked by its contribution to the model. The results show that δ(2)H, δ(18)O, Cs, Cu, Ca, and Rb contents are significant indicators for provenance discrimination and that not all of the metallic elements improve the prediction accuracy of the SVM model.

  18. Machine vision inspection of railroad track

    Science.gov (United States)

    2011-01-10

    North American Railways and the United States Department of Transportation : (US DOT) Federal Railroad Administration (FRA) require periodic inspection of railway : infrastructure to ensure the safety of railway operation. This inspection is a critic...

  19. Close range photogrammetry and machine vision

    CERN Document Server

    Atkinson, KB

    1996-01-01

    This book presents the methodology, algorithms, techniques and equipment necessary to achieve real time digital photogrammetric solutions, together with contemporary examples of close range photogrammetry.

  20. Machine Design and Vision Based Navigation

    OpenAIRE

    Gautam, Samrat

    2014-01-01

    This study covers the design of an autonomous robot and its testing on an artificial maize field constructed in an indoor environment. The ultimate goal of the project was to participate in the Field Robot Event 2014 organized by the University of Hohenheim in Germany. The project was commissioned by HAMK University of Applied Sciences, and the robot was fabricated and tested in the automation laboratory of HAMK UAS Valkeakoski. The test result obtained by plott...

  1. Machine-vision based optofluidic cell sorting

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew

    In contemporary life science there is an increasing emphasis on sorting rare disease-indicating cells within small dilute quantities such as in the confines of optofluidic lab-on-chip devices. Our approach to this is based on the use of optical forces to isolate red blood cells detected by advanc...... the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam’s propagation and its interaction with the laser catapulted and sorted cells....

  2. Identification of Fungi by Machine Vision

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael

    1999-01-01

    This paper presents some methods for identification and classification of fungal colonies into species solely by means of digital image analysis, without any additional chemical analysis needed. The methods described are completely automated and hence objective once a digital image of the fungus has bee...

  3. Beef quality grading using machine vision

    Science.gov (United States)

    Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha

    2000-12-01

    A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. Muscle longissimus dorsi (l.d.) was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. Match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
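
    The compactness criterion used to choose the number of morphological iterations can be sketched with OpenCV as below; the structuring element, iteration range and the synthetic ribeye mask are assumptions made only for illustration.

        # Hedged sketch: pick the erosion/dilation depth whose largest region is
        # most compact relative to its convex hull (synthetic mask for illustration).
        import numpy as np
        import cv2

        mask = np.zeros((200, 300), np.uint8)
        cv2.ellipse(mask, (150, 100), (120, 60), 0, 0, 360, 255, -1)  # placeholder lean-meat mask
        kernel = np.ones((5, 5), np.uint8)

        best_compactness, best_mask = -1.0, mask
        for i in range(1, 10):
            opened = cv2.dilate(cv2.erode(mask, kernel, iterations=i), kernel, iterations=i)
            contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                break
            largest = max(contours, key=cv2.contourArea)
            hull_area = cv2.contourArea(cv2.convexHull(largest))
            compactness = cv2.contourArea(largest) / hull_area if hull_area > 0 else 0.0
            if compactness > best_compactness:
                best_compactness, best_mask = compactness, opened   # candidate l.d. segmentation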

  4. Agrarian Visions.

    Science.gov (United States)

    Theobald, Paul

    A new feature in "Country Teacher," "Agrarian Visions" reminds rural teachers that they can do something about rural decline. Like the populism of the 1890s, the "new populism" advocates rural living. Current attempts to address rural decline are contrary to agrarianism because: (1) telecommunications experts seek to…

  5. Identification of novel plant peroxisomal targeting signals by a combination of machine learning methods and in vivo subcellular targeting analyses.

    Science.gov (United States)

    Lingner, Thomas; Kataya, Amr R; Antonicelli, Gerardo E; Benichou, Aline; Nilssen, Kjersti; Chen, Xiong-Yan; Siemsen, Tanja; Morgenstern, Burkhard; Meinicke, Peter; Reumann, Sigrun

    2011-04-01

    In the postgenomic era, accurate prediction tools are essential for identification of the proteomes of cell organelles. Prediction methods have been developed for peroxisome-targeted proteins in animals and fungi but are missing specifically for plants. For development of a predictor for plant proteins carrying peroxisome targeting signals type 1 (PTS1), we assembled more than 2500 homologous plant sequences, mainly from EST databases. We applied a discriminative machine learning approach to derive two different prediction methods, both of which showed high prediction accuracy and recognized specific targeting-enhancing patterns in the regions upstream of the PTS1 tripeptides. Upon application of these methods to the Arabidopsis thaliana genome, 392 gene models were predicted to be peroxisome targeted. These predictions were extensively tested in vivo, resulting in a high experimental verification rate of Arabidopsis proteins previously not known to be peroxisomal. The prediction methods were able to correctly infer novel PTS1 tripeptides, which even included novel residues. Twenty-three newly predicted PTS1 tripeptides were experimentally confirmed, and a high variability of the plant PTS1 motif was discovered. These prediction methods will be instrumental in identifying low-abundance and stress-inducible peroxisomal proteins and defining the entire peroxisomal proteome of Arabidopsis and agronomically important crop plants.

  6. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    Science.gov (United States)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

    Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can realize noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can be greatly beneficial for improving the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used for identification of blood species with spectroscopy methods. The Least Squares Support Vector Machine (LSSVM) has proved well suited to discrimination analysis. In this research, the PLSDA method and the LSSVM method were used for human blood discrimination. Compared with the results of the PLSDA method, LSSVM enhanced the performance of the identification models. The overall results showed that the LSSVM method was more feasible for identifying human and animal blood species, and sufficiently demonstrated that LSSVM is a reliable and robust method for human blood identification that can be more effective and accurate.
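
    Kennard-Stone selection, mentioned above as one of the calibration-set strategies, is compact enough to sketch directly in NumPy. The spectra matrix and the calibration-set size are placeholders.

        # Hedged sketch of Kennard-Stone sample selection: seed with the two most
        # distant samples, then repeatedly add the sample farthest from the set.
        import numpy as np

        def kennard_stone(X, n_select):
            dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
            selected = list(np.unravel_index(np.argmax(dist), dist.shape))
            while len(selected) < n_select:
                remaining = [i for i in range(len(X)) if i not in selected]
                min_dist = dist[np.ix_(remaining, selected)].min(axis=1)
                selected.append(remaining[int(np.argmax(min_dist))])
            return selected

        spectra = np.random.default_rng(3).normal(size=(120, 300))  # placeholder reflectance spectra
        calibration_idx = kennard_stone(spectra, n_select=80)       # indices of the calibration set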

  7. Classification of wines produced in specific regions by UV-visible spectroscopy combined with support vector machines.

    Science.gov (United States)

    Acevedo, F Javier; Jiménez, Javier; Maldonado, Saturnino; Domínguez, Elena; Narváez, Arántzazu

    2007-08-22

    Discriminating wines according to their denomination of origin using cost-effective techniques is something that attracts the attention of different industrial sectors. In search of simplicity, direct UV-visible spectrophotometric techniques and different multivariate statistical techniques are used with admissible results to characterize wine produced in specific regions. However, most of the reported classification methods do not exploit all of the statistical relations in the investigated dataset and are inherently affected by the presence of outliers. The aim of this paper is to test novel classification methods such as support vector machines as a means of improving the classification rate when UV-visible spectrophotometric methods are used to discriminate wines. The advantages of such a discrimination tool are demonstrated when classification rates are compared for a large number of Spanish red and white wines and classification rates above 96% are achieved. The proposed methodology also enables the selection of the most relevant wavelengths for sample discrimination.

  8. Healthy Vision Tips

    Science.gov (United States)

    Healthy vision starts with you! Use these ...

  9. Kids' Quest: Vision Impairment

    Science.gov (United States)

    What should you know? Vision impairment means that a person’s eyesight cannot be corrected ...

  10. Virtual screening approach to identifying influenza virus neuraminidase inhibitors using molecular docking combined with machine-learning-based scoring function.

    Science.gov (United States)

    Zhang, Li; Ai, Hai-Xin; Li, Shi-Meng; Qi, Meng-Yuan; Zhao, Jian; Zhao, Qi; Liu, Hong-Sheng

    2017-10-10

    In recent years, an epidemic of the highly pathogenic avian influenza H7N9 virus has persisted in China, with a high mortality rate. To develop novel anti-influenza therapies, we have constructed a machine-learning-based scoring function (RF-NA-Score) for the effective virtual screening of lead compounds targeting the viral neuraminidase (NA) protein. RF-NA-Score is more accurate than RF-Score, with a root-mean-square error of 1.46, Pearson's correlation coefficient of 0.707, and Spearman's rank correlation coefficient of 0.707 in a 5-fold cross-validation study. The performance of RF-NA-Score in a docking-based virtual screening of NA inhibitors was evaluated with a dataset containing 281 NA inhibitors and 322 noninhibitors. Compared with other docking-rescoring virtual screening strategies, rescoring with RF-NA-Score significantly improved the efficiency of virtual screening, and a strategy that averaged the scores given by RF-NA-Score, based on the binding conformations predicted with AutoDock, AutoDock Vina, and LeDock, was shown to be the best strategy. This strategy was then applied to the virtual screening of NA inhibitors in the SPECS database. The 100 selected compounds were tested in an in vitro H7N9 NA inhibition assay, and two compounds with novel scaffolds showed moderate inhibitory activities. These results indicate that RF-NA-Score improves the efficiency of virtual screening for NA inhibitors, and can be used successfully to identify new NA inhibitor scaffolds. Scoring functions specific for other drug targets could also be established with the same method.

  11. Hyperspectral Imaging and Support Vector Machine: A Powerful Combination to Differentiate Black Cohosh (Actaea racemosa) from Other Cohosh Species.

    Science.gov (United States)

    Tankeu, Sidonie; Vermaak, Ilze; Chen, Weiyang; Sandasi, Maxleene; Kamatou, Guy; Viljoen, Alvaro

    2017-10-06

    Actaea racemosa (black cohosh) has a history of traditional use in the treatment of general gynecological problems. However, the plant is known to be vulnerable to adulteration with other cohosh species. This study evaluated the use of shortwave infrared hyperspectral imaging (SWIR-HSI) in tandem with chemometric data analysis as a fast alternative method for the discrimination of four cohosh species (Actaea racemosa, Actaea podocarpa, Actaea pachypoda, Actaea cimicifuga) and 36 commercial products labelled as black cohosh. The raw material and commercial products were analyzed using SWIR-HSI and ultra-high-performance liquid chromatography coupled to mass spectrometry (UHPLC-MS) followed by chemometric modeling. From SWIR-HSI data (920 - 2514 nm), the range containing the discriminating information of the four species was identified as 1204 - 1480 nm using Matlab software. After reduction of the data set range, partial least squares discriminant analysis (PLS-DA) and support vector machine discriminant analysis (SVM-DA) models with coefficients of determination (R2 ) of ≥ 0.8 were created. The novel SVM-DA model showed better predictions and was used to predict the commercial product content. Seven out of 36 commercial products were recognized by the SVM-DA model as being true black cohosh while 29 products indicated adulteration. Analysis of the UHPLC-MS data demonstrated that six commercial products could be authentic black cohosh. This was confirmed using the fragmentation patterns of three black cohosh markers (cimiracemoside C; 12-β,21-dihydroxycimigenol-3-O-L-arabinoside; and 24-O-acetylhydroshengmanol-3-O-β-D-xylopyranoside). SWIR-HSI in conjunction with chemometric tools (SVM-DA) could identify 80% adulteration of commercial products labelled as black cohosh. Georg Thieme Verlag KG Stuttgart · New York.

  12. Improving model predictions for RNA interference activities that use support vector machine regression by combining and filtering features

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available. Abstract. Background: RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence utilizing a variety of possible methods and pathways. To dissect the factors that result in effective siRNA sequences, a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results: Eight overall feature mapping methods were compared in their abilities to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position-specific nucleotide compositions. The secondary factors are position-independent sequence motifs (N-grams) and guide strand to passenger strand sequence thermodynamics. Finally, the factors that are least contributory but are still predictive of efficacy are measures of intramolecular guide strand secondary structure and target strand secondary structure. Of these, the site of the 5'-most base of the guide strand is the most informative. Conclusion: The capacity of specific feature mapping methods and their ability to build predictive models of RNAi activity suggests a relative biological importance of these features. Some feature mapping methods are more informative in building predictive models, and overall t-test filtering provides a method to remove some noisy features or make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased predictive accuracy between predicted and observed activities both within datasets by cross validation, and between independently collected RNAi activity datasets. Feature filtering to remove features should be approached carefully, in that it is possible to reduce feature set size without substantially reducing predictive models, but the features retained in the candidate models become increasingly distinct. Software to perform feature prediction and SVM training and testing on nucleic acid
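
    As a much reduced illustration of the feature mapping plus SVM regression described above, the sketch below counts position-independent nucleotide n-grams and fits an epsilon-SVR to toy activity values. The sequences and activities are made up, and the paper's positional, thermodynamic and secondary-structure features are not reproduced.

        # Hedged sketch: n-gram features from guide-strand sequences feeding an SVM
        # regression model of RNAi activity (toy data, illustrative only).
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVR

        sequences = ["AUGGCUACGUAGCUAGCUA", "UUGGCAUCGAUCGAUGGCA", "GCUAGCUAGGCAUCGAUCG"]
        activities = [0.82, 0.35, 0.61]          # hypothetical knockdown efficacies

        model = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(1, 3), lowercase=False),
            SVR(kernel="rbf", C=10.0, epsilon=0.05),
        )
        model.fit(sequences, activities)
        print(model.predict(["AUGGCAUCGUAGCUAGCUA"]))   # predicted activity for a new sequence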

  13. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer Vision and Robotics is one of the most challenging areas of the 21st century. Its applications range from agriculture to medicine, household applications to humanoids, deep-sea applications to space applications, and industrial applications to unmanned plants. Today's technologies demand intelligent machines, which enable applications in various domains and services. Robotics is one such area that encompasses a number of technologies, and its applications are widespread. Computational vision, or machine vision, is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of Computational Vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Objects using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, and CT and MRI Image Fusion based on the Stationary Wavelet Transform. The book also covers articles from applicati...

  14. Pleiades Visions

    Science.gov (United States)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  15. Combined impairments in vision, hearing and cognition are associated with greater levels of functional and communication difficulties than cognitive impairment alone: Analysis of interRAI data for home care and long-term care recipients in Ontario.

    Science.gov (United States)

    Guthrie, Dawn M; Davidson, Jacob G S; Williams, Nicole; Campos, Jennifer; Hunter, Kathleen; Mick, Paul; Orange, Joseph B; Pichora-Fuller, M Kathleen; Phillips, Natalie A; Savundranayagam, Marie Y; Wittich, Walter

    2018-01-01

    The objective of the current study was to understand the added effects of having a sensory impairment (vision and/or hearing impairment) in combination with cognitive impairment with respect to health-related outcomes among older adults (65+ years old) receiving home care or residing in a long-term care (LTC) facility in Ontario, Canada. Cross-sectional analyses were conducted using existing data collected with one of two interRAI assessments, one for home care (n = 291,824) and one for LTC (n = 110,578). Items in the assessments were used to identify clients with single sensory impairments (e.g., vision only [VI], hearing only [HI]), dual sensory impairment (DSI; i.e., vision and hearing) and those with cognitive impairment (CI). We defined seven mutually exclusive groups based on the presence of single or combined impairments. The rate of people having all three impairments (i.e., CI+DSI) was 21.3% in home care and 29.2% in LTC. Across the seven groups, individuals with all three impairments were the most likely to report loneliness, to have a reduction in social engagement, and to experience reduced independence in their activities of daily living (ADLs) and instrumental ADLs (IADLs). Communication challenges were highly prevalent in this group, at 38.0% in home care and 49.2% in LTC. In both care settings, communication difficulties were more common in the CI+DSI group versus the CI-alone group. The presence of combined sensory and cognitive impairments is high among older adults in these two care settings and having all three impairments is associated with higher rates of negative outcomes than the rates for those having CI alone. There is a rising imperative for all health care professionals to recognize the potential presence of hearing, vision and cognitive impairments in those for whom they provide care, to ensure that basic screening occurs and to use those results to inform care plans.

  16. Virtual Vision

    Science.gov (United States)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  17. Evaluation of extreme learning machine for classification of individual and combined finger movements using electromyography on amputees and non-amputees.

    Science.gov (United States)

    Anam, Khairul; Al-Jumaily, Adel

    2017-01-01

    The success of myoelectric pattern recognition (M-PR) mostly relies on the features extracted and the classifier employed. This paper proposes and evaluates a fast classifier, the extreme learning machine (ELM), to classify individual and combined finger movements on amputees and non-amputees. ELM is a single hidden layer feed-forward network (SLFN) that avoids iterative learning by determining input weights randomly and output weights analytically. Therefore, it can accelerate the training time of SLFNs. In addition to the classifier evaluation, this paper evaluates various feature combinations to improve the performance of M-PR and investigates some feature projections to improve the class separability of the features. Different from other studies on the implementation of ELM in the myoelectric controller, this paper presents a complete and thorough investigation of various types of ELMs, including the node-based and kernel-based ELM. Furthermore, this paper provides comparisons of ELMs and other well-known classifiers such as linear discriminant analysis (LDA), k-nearest neighbour (kNN), support vector machine (SVM) and least-square SVM (LS-SVM). The experimental results show the most accurate ELM classifier is the radial basis function ELM (RBF-ELM). The comparison of RBF-ELM and other well-known classifiers shows that RBF-ELM is as accurate as SVM and LS-SVM but faster than the SVM family; it is superior to LDA and kNN. The experimental results also indicate that the accuracy gap of the M-PR between the amputees and non-amputees is small, with an accuracy of 98.55% on amputees and 99.5% on the non-amputees using six electromyography (EMG) channels. Copyright © 2016 Elsevier Ltd. All rights reserved.
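
    The node-based ELM itself is small enough to sketch directly: random input weights and biases, a nonlinear hidden layer, and output weights obtained analytically with the Moore-Penrose pseudo-inverse. The feature dimensions, class count and sigmoid activation below are generic assumptions rather than the study's exact EMG configuration.

        # Hedged sketch of a node-based extreme learning machine (ELM) classifier.
        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 24))                 # e.g. EMG features per window
        y = rng.integers(0, 10, size=300)              # finger-movement class labels
        T = np.eye(10)[y]                              # one-hot targets

        n_hidden = 200
        W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (never trained)
        b = rng.normal(size=n_hidden)                  # random biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid hidden-layer outputs

        beta = np.linalg.pinv(H) @ T                   # output weights solved analytically
        y_pred = np.argmax(H @ beta, axis=1)           # predicted classes (training set)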

  18. Dictionary of computer vision and image processing

    CERN Document Server

    Fisher, Robert B; Dawson-Howe, Kenneth; Fitzgibbon, Andrew; Robertson, Craig; Trucco, Emanuele; Williams, Christopher K I

    2013-01-01

    Written by leading researchers, the 2nd Edition of the Dictionary of Computer Vision & Image Processing is a comprehensive and reliable resource which now provides explanations of over 3500 of the most commonly used terms across image processing, computer vision and related fields including machine vision. It offers clear and concise definitions with short examples or mathematical precision where necessary for clarity that ultimately makes it a very usable reference for new entrants to these fields at senior undergraduate and graduate level, through to early career researchers to help build u

  19. Developing new VO2max prediction models from maximal, submaximal and questionnaire variables using support vector machines combined with feature selection.

    Science.gov (United States)

    Abut, Fatih; Akay, Mehmet Fatih; George, James

    2016-12-01

    Maximal oxygen uptake (VO2max) is an essential part of health and physical fitness, and refers to the highest rate of oxygen consumption an individual can attain during exhaustive exercise. In this study, for the first time in the literature, we combine the triple of maximal, submaximal and questionnaire variables to propose new VO2max prediction models using Support Vector Machines (SVMs) combined with the Relief-F feature selector to predict and reveal the distinct predictors of VO2max. For comparison purposes, hybrid models based on double combinations of maximal, submaximal and questionnaire variables have also been developed. By utilizing 10-fold cross-validation, the performance of the models has been calculated using the multiple correlation coefficient (R) and root mean square error (RMSE). The results show that the best values of R and RMSE, 0.94 and 2.92 mL kg-1 min-1 respectively, have been obtained by combining the triple of relevantly identified maximal, submaximal and questionnaire variables. Compared with the results of the rest of the hybrid models in this study and the other prediction models in the literature, the reported values of R and RMSE have been found to be considerably more accurate. The predictor variables gender, age, maximal heart rate (MX-HR), submaximal ending speed (SM-ES) of the treadmill and the Perceived Functional Ability (Q-PFA) questionnaire have been found to be the most relevant variables in predicting VO2max. The results have also been compared with those of the Multilayer Perceptron (MLP) and Tree Boost (TB), and it is seen that SVM significantly outperforms the other regression methods for prediction of VO2max. Copyright © 2016 Elsevier Ltd. All rights reserved.
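
    A hedged sketch of the modelling step follows: feature ranking, an SVM regressor and 10-fold cross-validation on a table of candidate predictors. Mutual-information ranking stands in for the Relief-F selector used in the study, and the data are synthetic.

        # Hedged sketch: feature selection + SVM regression of VO2max with 10-fold CV
        # (mutual information replaces Relief-F purely for illustration).
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_regression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        X = rng.normal(size=(250, 15))   # maximal + submaximal + questionnaire variables
        y = 45 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2.0, size=250)  # synthetic VO2max

        model = make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_regression, k=5),
            SVR(kernel="rbf", C=10.0),
        )
        rmse = -cross_val_score(model, X, y, cv=10, scoring="neg_root_mean_squared_error")
        print("mean RMSE:", rmse.mean())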

  20. Tritium storage plant based on a combination of St707 and St737 getter alloy beds for high field fusion machines

    Energy Technology Data Exchange (ETDEWEB)

    Bonizzoni, G.; Gervasini, G.; Ghezzi, F. (Consiglio Nazionale delle Ricerche, Milan (Italy). Lab. di Fisica del Plasma); Conte, A. (Milan Univ. (Italy). Ist. di Fisica); Gatto, G.; Rigamonti, M. (Consorzio IGNITOR, Turin (Italy))

    1990-01-01

    Thermonuclear fusion machines (which will be operated with D-T mixtures) require tritium storage and supply systems that meet safety conditions. In order to prevent possible accidents with a large release of tritium, the tritium must be trapped in solid and reversible solution forms by absorption beds. Moreover, residual gaseous tritium in the pipelines, and permeation through the primary containment system, must be minimized. For storage, transfer, injection and recovery, a suitable system can be designed which uses metallic getter beds. Reversible solid solutions are formed by tritium sorption with low residual partial pressure, and re-emission by heating at low temperatures results in the reduction of permeation. This work shows the possibility of using a combination of two Zr-V-Fe getter beds with different alloy compositions as an alternative to the usual uranium beds. In particular, the characterization of the new St737 getter alloy is carried out. Advantages of combining the new getter with the well-known St707 getter alloy are presented. (author).

  1. Older Adults With a Combination of Vision and Hearing Impairment Experience Higher Rates of Cognitive Impairment, Functional Dependence, and Worse Outcomes Across a Set of Quality Indicators.

    Science.gov (United States)

    Davidson, Jacob G S; Guthrie, Dawn M

    2017-08-01

    Hearing and vision impairment were examined across several health-related outcomes and across a set of quality indicators (QIs) in home care clients with both vision and hearing loss (or dual sensory impairment [DSI]). Data collected using the Resident Assessment Instrument for Home Care (RAI-HC) were analyzed in a sample of older home care clients. The QIs represent the proportion of clients experiencing negative outcomes (e.g., falls, social isolation). The average age of clients was 82.8 years ( SD = 7.9), 20.5% had DSI and 8.5% had a diagnosis of Alzheimer's disease (AD). Clients with DSI were more likely to have a diagnosis of dementia (not AD), have functional impairments, report loneliness, and have higher rates across 20 of the 22 QIs, including communication difficulty and cognitive decline. Clients with highly impaired hearing, and any visual impairment, had the highest QI rates. Individuals with DSI experience higher rates of adverse events across many health-related outcomes and QIs. Understanding the unique contribution of hearing and vision in this group can promote optimal quality of care.

  2. Accelerating Monte Carlo Molecular Simulations Using Novel Extrapolation Schemes Combined with Fast Database Generation on Massively Parallel Machines

    KAUST Repository

    Amir, Sahar Z.

    2013-05-01

    We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized canonical NVT ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chains is implemented and compared with the previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency. In particular, this makes it applicable to large-scale optimization of L
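
    The reweighting idea behind such schemes can be illustrated with textbook single-histogram reweighting of a canonical average to a nearby inverse temperature; this is only a generic illustration of the principle, not the authors' specific interpolation or extrapolation scheme.

        # Hedged illustration: reweight samples drawn at beta_old to estimate the
        # ensemble average of an observable at a neighboring beta_new.
        import numpy as np

        rng = np.random.default_rng(6)
        energies = rng.normal(loc=-500.0, scale=20.0, size=100_000)  # placeholder sampled energies
        observable = energies ** 2                                    # placeholder observable A(E)

        beta_old, beta_new = 1.00, 1.02
        log_w = -(beta_new - beta_old) * energies
        log_w -= log_w.max()                     # stabilise the exponentials
        w = np.exp(log_w)

        A_reweighted = np.sum(w * observable) / np.sum(w)   # <A> estimated at beta_new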

  3. HUMAN MACHINE COOPERATIVE TELEROBOTICS

    Energy Technology Data Exchange (ETDEWEB)

    William R. Hamel; Spivey Douglass; Sewoong Kim; Pamela Murray; Yang Shou; Sriram Sridharan; Ge Zhang; Scott Thayer; Rajiv V. Dubey

    2003-06-30

    described as Human Machine Cooperative Telerobotics (HMCTR). The HMCTR combines the telerobot with robotic control techniques to improve system efficiency and reliability in teleoperation mode. In this topical report, the control strategy, configuration and experimental results of Human Machine Cooperative Telerobotics (HMCTR), which modifies and limits the commands of the human operator to follow predefined constraints in the teleoperation mode, are described. The current implementation is a laboratory-scale system that will be incorporated into an engineering-scale system at the Oak Ridge National Laboratory in the future.

  4. Pediatric Low Vision

    Science.gov (United States)

    What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  5. Patient-related quality assurance with different combinations of treatment planning systems, techniques, and machines. A multi-institutional survey

    Energy Technology Data Exchange (ETDEWEB)

    Steiniger, Beatrice; Schwedas, Michael; Weibert, Kirsten; Wiezorek, Tilo [University Hospital Jena, Department of Radiation Oncology, Jena (Germany); Berger, Rene [SRH Hospital Gera, Department of Radiation Oncology, Gera (Germany); Eilzer, Sabine [Martin-Luther-Hospital, Radiation Therapy, Berlin (Germany); Kornhuber, Christine [University Hospital Halle, Department of Radiation Oncology, Halle (Saale) (Germany); Lorenz, Kathleen [Hospital of Chemnitz, Department for Radiation Oncology, Chemnitz (Germany); Peil, Torsten [MVZ Center for Radiation Oncology Halle GmbH, Halle (Saale) (Germany); Reiffenstuhl, Carsten [University Hospital Carl Gustav Carus, Department of Radiation Oncology, Dresden (Germany); Schilz, Johannes [Helios Hospital Erfurt, Department of Radiation Oncology, Erfurt (Germany); Schroeder, Dirk [SRH Central Hospital Suhl, Department of Radiation Oncology, Suhl (Germany); Pensold, Stephanie [Community Hospital Dresden-Friedrichstadt, Department of Radiation Oncology, Dresden (Germany); Walke, Mathias [Otto-von-Guericke University Magdeburg, Department of Radiation Oncology, Magdeburg (Germany); Wolf, Ulrich [University Hospital Leipzig, Department of Radiation Oncology, Leipzig (Germany)

    2017-01-15

    This project compares the different patient-related quality assurance systems for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) techniques currently used in the central Germany area with an independent measuring system. The participating institutions generated 21 treatment plans with different combinations of treatment planning systems (TPS) and linear accelerators (LINAC) for the QUASIMODO (Quality ASsurance of Intensity MODulated radiation Oncology) patient model. The plans were exposed to the ArcCHECK measuring system (Sun Nuclear Corporation, Melbourne, FL, USA). The dose distributions were analyzed using the corresponding software and a point dose measured at the isocenter with an ionization chamber. According to the generally used criteria of a 10% threshold, 3% difference, and 3 mm distance, the majority of plans investigated showed a gamma index exceeding 95%. Only one plan did not fulfill the criteria and three of the plans did not comply with the commonly accepted tolerance level of ±3% in point dose measurement. Using only one of the two examined methods for patient-related quality assurance is not sufficiently significant in all cases. (orig.) [German abstract:] The aim of this project was to compare the various patient-related quality assurance systems for intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc therapy (VMAT) currently used in the central Germany region with an independent measuring system. The participating institutions calculated a total of 21 treatment plans with different planning systems (TPS) and linear accelerators (LINAC) for the QUASIMODO (Quality ASsurance of Intensity MODulated radiation Oncology) patient model, which were then transferred to the ArcCHECK phantom (Sun Nuclear Corporation, Melbourne, FL, USA) and irradiated. For the evaluation, both a point measurement at the isocenter and the dose distribution in the diode plane of the

  6. Discrete curvatures combined with machine learning for automated extraction of impact craters on 3D topographic meshes

    Science.gov (United States)

    Christoff, Nicole; Jorda, Laurent; Viseur, Sophie; Bouley, Sylvain; Manolova, Agata; Mari, Jean-Luc

    2017-04-01

    number of false negative detections compared to previous approaches based on 2.5D data processing. The proposed method was validated on a Mars dataset, including a numerical topography acquired by the Mars Orbiter Laser Altimeter (MOLA) instrument and combined with Barlow et al. (2000) crater database. Keywords: geometric modeling, mesh processing, neural network, discrete curvatures, crater detection, planetary science.

  7. Machine Translation

    Institute of Scientific and Technical Information of China (English)

    张严心

    2015-01-01

    As a kind of ancillary translation tool, Machine Translation has received increasing attention and has been studied in different ways by a great many researchers and scholars for a long time. Knowing the definition of Machine Translation and analysing its benefits and problems is significant for translators who want to make good use of Machine Translation, and is helpful for developing and perfecting Machine Translation systems in the future.

  8. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview of current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful at both undergraduate and postgraduate levels, and it is of interest to all those working with manufacturing and machining technology.

  9. Human factors issues in the use of night vision devices

    Science.gov (United States)

    Kaiser, Mary K.; Foyle, David C.

    1991-01-01

    An account is given of the critical human factors that arise in field data on the differences between night vision displays and unaided day vision. Attention is given to the findings of empirical studies of performance on rotorcraft-flight-relevant perceptual tasks in which depth and distance perception are critical factors. Suggestions are made for design modifications to critical man-machine components of current night vision systems.

  10. Binocular Vision

    Science.gov (United States)

    Blake, Randolph; Wilson, Hugh

    2010-01-01

    This essay reviews major developments, empirical and theoretical, in the field of binocular vision during the last 25 years. We limit our survey primarily to work on human stereopsis, binocular rivalry and binocular contrast summation, with discussion where relevant of single-unit neurophysiology and human brain imaging. We identify several key controversies that have stimulated important work on these problems. In the case of stereopsis those controversies include position versus phase encoding of disparity, dependence of disparity limits on spatial scale, role of occlusion in binocular depth and surface perception, and motion in 3D. In the case of binocular rivalry, controversies include eye versus stimulus rivalry, role of “top-down” influences on rivalry dynamics, and the interaction of binocular rivalry and stereopsis. Concerning binocular contrast summation, the essay focuses on two representative models that highlight the evolving complexity in this field of study. PMID:20951722

  11. Robot Vision

    Science.gov (United States)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The operation of a system is described that was built both to model the vision of primate animals, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random-square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions on the left side of an object that can be viewed only by the left camera, and on the right side of an object that can be viewed only by the right camera.

  12. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    Energy Technology Data Exchange (ETDEWEB)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations have been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes and the operation of large-scale public works projects, as well as the volume of the published literature on this topic, clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e., model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space and creates a lower-dimension feature space in which fault estimation results can be effectively presented to the operations personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the
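
    The core idea of letting a support vector machine supply a time-varying parameter to a first-principles equation can be illustrated with a toy example. The sketch below is not the authors' constrained-training software; it uses scikit-learn's SVR on synthetic data, and the exponential beam-decay model, the operating-condition features and all numerical values are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical first-principles model: exponential beam-current decay I(t) = I0 * exp(-t / tau),
# where the lifetime tau is a time-varying parameter that depends on operating conditions.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 3))           # operating conditions (e.g. pressure, current, tune)
tau_true = 5.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]     # synthetic "true" lifetime in hours
tau_noisy = tau_true + rng.normal(0.0, 0.1, 200)

# The SVM models the FP parameter rather than the raw process output
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, tau_noisy)

def beam_current(t_hours, conditions, i0=100.0):
    """Combined model: first-principles equation with an SVM-estimated lifetime parameter."""
    tau = svr.predict(np.atleast_2d(conditions))[0]
    return i0 * np.exp(-t_hours / tau)

print(f"predicted current after 2 h: {beam_current(2.0, X[0]):.1f} mA")
```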

  13. Impairments to Vision

    Science.gov (United States)

    Impairments to Vision: Normal Vision, Diabetic Retinopathy, Age-related Macular Degeneration. In this ... pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  14. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques

    Science.gov (United States)

    Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo

    2017-11-01

    The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used three advanced methods, namely, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, the results of their accuracy were validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.
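
    The validation step described above, training a classifier on the conditioning factors and scoring it by the area under the receiver operating characteristic curve, is easy to prototype. The sketch below uses scikit-learn on synthetic stand-in data; the twelve-factor feature matrix, the labels and the model settings are assumptions and do not reproduce the study's dataset or its reported values.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the twelve conditioning factors (slope, TWI, NDVI, lithology, ...)
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=1000) > 0).astype(int)  # 1 = landslide

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

# Validation via the area under the ROC curve ("prediction rate")
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"SVM prediction rate (AUC): {auc:.3f}")
```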

  15. A novel framework for the identification of drug target proteins: Combining stacked auto-encoders with a biased support vector machine.

    Directory of Open Access Journals (Sweden)

    Qi Wang

    Full Text Available The identification of drug target proteins (IDTP) plays a critical role in biometrics. The aim of this study was to retrieve potential drug target proteins (DTPs) from a collected protein dataset, which represents an overwhelming task of great significance. Previously reported methodologies for this task generally employ protein-protein interactive networks but neglect informative biochemical attributes. We formulated a novel framework utilizing biochemical attributes to address this problem. In the framework, a biased support vector machine (BSVM) was combined with the deep embedded representation extracted using a deep learning model, stacked auto-encoders (SAEs). In cases of non-drug target proteins (NDTPs) contaminated by DTPs, the framework is beneficial due to the efficient representation of the SAE and relief of the imbalance effect by the BSVM. The experimental results demonstrated the effectiveness of our framework, and the generalization capability was confirmed via comparisons to other models. This study is the first to exploit a deep learning model for IDTP. In summary, nearly 23% of the NDTPs were predicted as likely DTPs, which are awaiting further verification based on biomedical experiments.
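
    The imbalance-handling idea, penalising mistakes on the rare drug-target class more heavily than mistakes on the abundant non-target class, can be approximated with a class-weighted SVM. The sketch below is a simplified stand-in for the paper's biased SVM and uses random 64-dimensional vectors in place of the stacked auto-encoder features; the class sizes and weights are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Imbalanced toy problem: few drug target proteins (label 1) among many non-targets (label 0).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(950, 64)),    # stand-in for SAE features of NDTPs
               rng.normal(0.8, 1.0, size=(50, 64))])    # stand-in for SAE features of DTPs
y = np.array([0] * 950 + [1] * 50)

# A "biased" SVM can be approximated by weighting errors on the rare class more strongly.
bsvm = SVC(kernel="rbf", class_weight={0: 1.0, 1: 19.0})
bsvm.fit(X, y)
print("proteins predicted as drug targets:", int(bsvm.predict(X).sum()))
```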

  16. Combined impairments in vision, hearing and cognition are associated with greater levels of functional and communication difficulties than cognitive impairment alone: Analysis of interRAI data for home care and long-term care recipients in Ontario.

    Directory of Open Access Journals (Sweden)

    Dawn M Guthrie

    Full Text Available The objective of the current study was to understand the added effects of having a sensory impairment (vision and/or hearing impairment) in combination with cognitive impairment with respect to health-related outcomes among older adults (65+ years old) receiving home care or residing in a long-term care (LTC) facility in Ontario, Canada. Cross-sectional analyses were conducted using existing data collected with one of two interRAI assessments, one for home care (n = 291,824) and one for LTC (n = 110,578). Items in the assessments were used to identify clients with single sensory impairments (e.g., vision only [VI], hearing only [HI]), dual sensory impairment (DSI; i.e., vision and hearing) and those with cognitive impairment (CI). We defined seven mutually exclusive groups based on the presence of single or combined impairments. The rate of people having all three impairments (i.e., CI+DSI) was 21.3% in home care and 29.2% in LTC. Across the seven groups, individuals with all three impairments were the most likely to report loneliness, to have a reduction in social engagement, and to experience reduced independence in their activities of daily living (ADLs) and instrumental ADLs (IADLs). Communication challenges were highly prevalent in this group, at 38.0% in home care and 49.2% in LTC. In both care settings, communication difficulties were more common in the CI+DSI group versus the CI-alone group. The presence of combined sensory and cognitive impairments is high among older adults in these two care settings, and having all three impairments is associated with higher rates of negative outcomes than the rates for those having CI alone. There is a rising imperative for all health care professionals to recognize the potential presence of hearing, vision and cognitive impairments in those for whom they provide care, to ensure that basic screening occurs and to use those results to inform care plans.

  17. Ideas for Teaching Vision and Visioning

    Science.gov (United States)

    Quijada, Maria Alejandra

    2017-01-01

    In teaching leadership, a key element to include should be a discussion about vision: what it is, how to communicate it, and how to ensure that it is effective and shared. This article describes a series of exercises that rely on videos to illustrate different aspects of vision and visioning, both in the positive and in the negative. The article…

  18. Glioma survival prediction with the combined analysis of in vivo 11C-MET-PET, ex vivo and patient features by supervised machine learning.

    Science.gov (United States)

    Papp, Laszlo; Poetsch, Nina; Grahovac, Marko; Schmidbauer, Victor; Woehrer, Adelheid; Preusser, Matthias; Mitterhauser, Markus; Kiesel, Barbara; Wadsak, Wolfgang; Beyer, Thomas; Hacker, Marcus; Traub-Weidinger, Tatjana

    2017-11-24

    Gliomas are the most common types of tumors in the brain. While the definite diagnosis is routinely made ex vivo by histopathologic and molecular examination, diagnostic work-up of patients with suspected glioma is mainly done by using magnetic resonance imaging (MRI). Nevertheless, L-S-methyl-11C-methionine (11C-MET) Positron Emission Tomography (PET) holds a great potential in characterization of gliomas. The aim of this study was to establish machine learning (ML) driven survival models for glioma built on 11C-MET-PET, ex vivo and patient characteristics. Methods: 70 patients with a treatment naïve glioma, who had a positive 11C-MET-PET and histopathology-derived ex vivo feature extraction, such as World Health Organization (WHO) 2007 tumor grade, histology and isocitrate dehydrogenase (IDH1-R132H) mutation status were included. The 11C-MET-positive primary tumors were delineated semi-automatically on PET images followed by the feature extraction of tumor-to-background ratio based general and higher-order textural features by applying five different binning approaches. In vivo and ex vivo features, as well as patient characteristics (age, weight, height, body-mass-index, Karnofsky-score) were merged to characterize the tumors. Machine learning approaches were utilized to identify relevant in vivo, ex vivo and patient features and their relative weights for 36 months survival prediction. The resulting feature weights were used to establish three predictive models per binning configuration based on a combination of: in vivo/ex vivo and clinical patient information (M36IEP), in vivo and patient-only information (M36IP), and in vivo only (M36I). In addition a binning-independent ex vivo and patient-only (M36EP) model was created. The established models were validated in a Monte Carlo (MC) cross-validation scheme. Results: Most prominent ML-selected and -weighted features were patient and ex vivo based followed by in vivo features. The highest area under the curve
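
    The Monte Carlo cross-validation scheme mentioned above amounts to repeated random train/test splits. The sketch below shows the general pattern with scikit-learn on synthetic stand-in data; the feature matrix, the survival labels, the classifier and the number of splits are assumptions and do not reproduce the study's models.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Stand-in data: one row per patient, columns are merged in vivo / ex vivo / clinical features,
# labels indicate 36-month survival (1 = survived).
rng = np.random.default_rng(7)
X = rng.normal(size=(70, 20))
y = (X[:, 0] + rng.normal(scale=0.5, size=70) > 0).astype(int)

# Monte Carlo (repeated random sub-sampling) cross-validation
mc_cv = ShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=mc_cv, scoring="roc_auc")
print(f"mean AUC over {len(scores)} Monte Carlo splits: {scores.mean():.3f} +/- {scores.std():.3f}")
```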

  19. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time-varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  20. Inspection of the integrity of surface mounted integrated circuits on a printed circuit board using vision

    OpenAIRE

    Yakoub, Imad

    1991-01-01

    Machine vision technology has permeated many areas of industry, and automated inspection systems are playing increasingly important roles in many production processes. Electronic manufacturing is a good example of the integration of vision based feedback in manufacturing and the assembly of surface mount PCBs is typical of the technology involved. There are opportunities to use machine vision during different stages of the surface mount process. The problem in the inspection of solder joints ...

  1. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver as all the reading passages and student activities are provided. Presented in s

  2. Generative Complete Machining in Finished-Part Quality

    National Research Council Canada - National Science Library

    2017-01-01

    ...: With the laser deposition welding and 5-axis simultaneous machining, the machine combines additive and subtractive manufacturing processes, thus offering entirely new levels of freedom with regard...

  3. An integrated anti-arrhythmic target network of a Chinese medicine compound, Wenxin Keli, revealed by combined machine learning and molecular pathway analysis.

    Science.gov (United States)

    Wang, Taiyi; Lu, Ming; Du, Qunqun; Yao, Xi; Zhang, Peng; Chen, Xiaonan; Xie, Weiwei; Li, Zheng; Ma, Yuling; Zhu, Yan

    2017-05-02

    Wenxin Keli (WK), a Chinese patent medicine, is known to be effective against cardiac arrhythmias and heart failure. Although a number of electrophysiological findings regarding its therapeutic effect have been reported, the active components and system-level characterizations of the component-target interactions of WK have yet to be elucidated. In the current study, we present the first report of a new protective effect of WK on suppressing anti-arrhythmic-agent-induced arrhythmias. In a model of isolated guinea pig hearts, rapid perfusion of quinidine altered the heart rate and prolonged the Q-T interval. Pretreatment with WK significantly prevented quinidine-induced arrhythmias. To explain the therapeutic and protective effects of WK, we constructed an integrated multi-target pharmacological mechanism prediction workflow in combination with machine learning and molecular pathway analysis. This workflow had the ability to predict and rank the probability of each compound interacting with 1715 target proteins simultaneously. The ROC value statistics showed that 97.786% of the values for target prediction were larger than 0.8. We applied this model to carry out target prediction and network analysis for the identified components of 5 herbs in WK. Using the 124 potential anti-arrhythmic components and the 30 corresponding protein targets obtained, an integrative anti-arrhythmic molecular mechanism of WK was proposed. Emerging drug/target networks suggested ion channel and intracellular calcium and autonomic nervous and hormonal regulation had critical roles in WK-mediated anti-arrhythmic activity. A validation of the proposed mechanisms was achieved by demonstrating that calaxin, one of the WK components from Gansong, dose-dependently blocked its predicted target CaV1.2 channel in an electrophysiological assay.

  4. Vision Systems with the Human in the Loop

    Science.gov (United States)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.

  5. North American Natural Gas Vision

    Science.gov (United States)

    2005-01-01

    Pemex Comercio Internacional (Pemex International), responsible for international trade. ... In 1995, the ... important, running for 710 km from Ciudad Pemex to Mérida in the Yucatan Peninsula. It was built to provide natural gas to the Mérida III combined cycle

  6. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.

  7. Mechanical design of machine components

    CERN Document Server

    Ugural, Ansel C

    2015-01-01

    Mechanical Design of Machine Components, Second Edition strikes a balance between theory and application, and prepares students for more advanced study or professional practice. It outlines the basic concepts in the design and analysis of machine elements using traditional methods, based on the principles of mechanics of materials. The text combines the theory needed to gain insight into mechanics with numerical methods in design. It presents real-world engineering applications, and reveals the link between basic mechanics and the specific design of machine components and machines. Divided into three parts, this revised text presents basic background topics, deals with failure prevention in a variety of machine elements and covers applications in design of machine components as well as entire machines. Optional sections treating special and advanced topics are also included. Key Features of the Second Edition: Incorporates material that has been completely updated with new chapters, problems, practical examples...

  8. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY]; Reddy, Patel Bhageerath [Madison, WI]

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  9. Bionic machines and systems

    Energy Technology Data Exchange (ETDEWEB)

    Halme, A.; Paanajaervi, J. (eds.)

    2004-07-01

    Introduction: Biological systems form a versatile and complex entirety on our planet. One evolutionary branch of primates, called humans, has created an extraordinary skill, called technology, by the aid of which it nowadays dominates life on the planet. Humans use technology for producing and harvesting food, healthcare and reproduction, increasing their capability to commute and communicate, defending their territory etc., and for developing more technology. As a result of this, humans have become very dependent on technology, so that they have been forced to form a specialized class of humans, called engineers, who take care of the knowledge of technology, developing it further and transferring it to later generations. Until now, technology has been relatively independent from biology, although some of its branches, e.g. biotechnology and biomedical engineering, have traditionally been in close contact with it. There exists, however, an increasing interest in expanding the interface between technology and biology, either by directly utilizing biological processes or materials by combining them with 'dead' technology, or by mimicking in technological solutions the biological innovations created by evolution. The latter theme is the focus of this report, which has been written as the proceedings of the post-graduate seminar 'Bionic Machines and Systems' held at the HUT Automation Technology Laboratory in autumn 2003. The underlying idea of the seminar was to analyze biological species by considering them as 'robotic machines' having various functional subsystems, such as those for energy, motion and motion control, perception, navigation, mapping and localization. We were also interested in intelligent capabilities, such as learning and communication, and social structures like swarming behavior and its mechanisms. The word 'bionic machine' comes from the book which was among the initial material when starting our mission to the fascinating world

  10. Permutation Machines.

    Science.gov (United States)

    Bhatia, Swapnil; LaBoda, Craig; Yanez, Vanessa; Haddock-Angelli, Traci; Densmore, Douglas

    2016-08-19

    We define a new inversion-based machine called a permuton of n genetic elements, which allows the n elements to be rearranged in any of the n·(n - 1)·(n - 2)···2·1 = n! distinct orderings. We present two design algorithms for architecting such a machine. We define a notion of a feasible design and use the framework to discuss the feasibility of the permuton architectures. We have implemented our design algorithms in freely usable, web-accessible software for exploration of these machines. Permutation machines could be used as memory elements or state machines and explicitly illustrate a rational approach to designing biological systems.
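
    To give a sense of the combinatorics involved, the number of orderings a permuton of n elements must support grows as n factorial. The short sketch below simply enumerates the orderings for a small, hypothetical set of genetic elements; it is unrelated to the authors' design algorithms or software.

```python
from itertools import permutations
from math import factorial

elements = ["promoter", "rbs", "cds", "terminator"]   # hypothetical genetic elements
orderings = list(permutations(elements))

assert len(orderings) == factorial(len(elements))     # n! = 4! = 24 distinct orderings
print(f"{len(elements)} elements -> {len(orderings)} orderings")
print(orderings[:3])
```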

  11. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
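
    The entropy criterion described above is just the Shannon entropy of the image's intensity histogram: low for nearly uniform patches (probably a single object), high for cluttered patches (probably several objects). The sketch below is an illustrative stand-in, not the authors' implementation; the synthetic patches and the 8-bit grayscale assumption are mine.

```python
import numpy as np

def image_entropy(gray_image, bins=256):
    """Shannon entropy (in bits) of an 8-bit grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
plain = np.full((64, 64), 128, dtype=np.uint8) + rng.integers(0, 3, (64, 64), dtype=np.uint8)
clutter = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(f"plain patch (single object):    {image_entropy(plain):.2f} bits")
print(f"cluttered patch (many objects): {image_entropy(clutter):.2f} bits")
```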

  12. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    Full Text Available We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  13. Artificial Vision, New Visual Modalities and Neuroadaptation

    Directory of Open Access Journals (Sweden)

    Hilmi Or

    2012-01-01

    Full Text Available To study the descriptions from which artificial vision derives, to explore the new visual modalities resulting from eye surgeries and diseases, and to gain awareness of the use of machine vision systems for both enhancement of visual perception and better understanding of neuroadaptation. Science could not define until today what vision is. However, some optical-based systems and definitions have been established considering some factors for the formation of seeing. The best known system includes the Gabor filter and Gabor patch, which work on edge perception, describing visual perception in the best known way. These systems are used today in the industry and technology of machines, robots and computers to provide their "seeing". These definitions are used beyond machinery in humans for neuroadaptation in new visual modalities after some eye surgeries or to improve the quality of some already known visual modalities. Besides this, "the blindsight", which was not known to exist until 35 years ago, can be stimulated with visual exercises. The Gabor system is a description of visual perception definable in machine vision as well as in human visual perception. This system is used today in robotic vision. There are new visual modalities which arise after some eye surgeries or with the use of some visual optical devices. Also, blindsight is a different visual modality starting to be defined even though the exact etiology is not known. In all the new visual modalities, new vision-stimulating therapies using the Gabor systems can be applied. (Turk J Ophthalmol 2012; 42: 61-5)

  14. Activity Recognition in Egocentric video using SVM, kNN and Combined SVMkNN Classifiers

    Science.gov (United States)

    Sanal Kumar, K. P.; Bhavani, R., Dr.

    2017-08-01

    Egocentric vision is a unique perspective in computer vision which is human-centric. The recognition of egocentric actions is a challenging task which helps in assisting elderly people, disabled patients and so on. In this work, life-logging activity videos are taken as input. There are two category levels: the first is the top level and the second is the second level. Here, the recognition is done using features like Histogram of Oriented Gradients (HOG), Motion Boundary Histogram (MBH) and Trajectory. The features are fused together and act as a single feature. The extracted features are reduced using Principal Component Analysis (PCA). The reduced features are provided as input to classifiers such as Support Vector Machine (SVM), k-Nearest Neighbor (kNN) and a combined SVM and kNN classifier (combined SVMkNN). These classifiers are evaluated, and the combined SVMkNN provided better results than the other classifiers in the literature.
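
    One simple way to realise a PCA-reduced feature pipeline with an SVM/kNN combination is soft voting over the two classifiers, as sketched below. This is an illustration of the general pattern, not the paper's exact SVMkNN scheme; the fused-descriptor matrix, the activity labels and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for fused HOG + MBH + Trajectory descriptors (one row per video clip)
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 200))
y = X[:, :4].argmax(axis=1)              # four hypothetical activity classes

# PCA reduces the fused descriptor, then SVM and kNN votes are combined
combined = make_pipeline(
    PCA(n_components=30),
    VotingClassifier([("svm", SVC(probability=True)),
                      ("knn", KNeighborsClassifier(n_neighbors=5))], voting="soft"),
)
print("cross-validated accuracy:", cross_val_score(combined, X, y, cv=5).mean())
```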

  15. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

    Full Text Available We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and have lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach is localizing objects robustly, when placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.

  16. Machine translation

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M.

    1982-04-01

    Each language has its own structure. In translating one language into another one, language attributes and grammatical interpretation must be defined in an unambiguous form. In order to parse a sentence, it is necessary to recognize its structure. A so-called context-free grammar can help in this respect for machine translation and machine-aided translation. Problems to be solved in studying machine translation are taken up in the paper, which discusses subjects for semantics and for syntactic analysis and translation software. 14 references.

  17. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object's color and then perform color segmentation. When there is a wrong action judgment, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best action judgment by weighted voting. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
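
    The weight voting mechanism can be illustrated as a weighted tally over candidate action judgments, as in the sketch below. The data structure, the action names and the scores are hypothetical; the real system derives them from its image-processing stages.

```python
from collections import defaultdict

def weighted_vote(judgments):
    """Pick the action whose accumulated (condition score x weight value) is largest.

    `judgments` is a list of (action_label, score, weight) tuples produced by
    different detectors or frames; the structure here is an assumption.
    """
    totals = defaultdict(float)
    for action, score, weight in judgments:
        totals[action] += score * weight
    return max(totals, key=totals.get)

candidates = [("raise_left_arm", 0.7, 1.0),
              ("raise_right_arm", 0.4, 0.8),
              ("raise_left_arm", 0.6, 0.5)]
print(weighted_vote(candidates))   # -> 'raise_left_arm'
```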

  18. Introduction: Minds, Bodies, Machines

    Directory of Open Access Journals (Sweden)

    Deirdre Coleman

    2008-10-01

    Full Text Available This issue of 19 brings together a selection of essays from an interdisciplinary conference on 'Minds, Bodies, Machines' convened last year by Birkbeck's Centre for Nineteenth-Century Studies, University of London, in partnership with the English programme, University of Melbourne and software developers Constraint Technologies International (CTI). The conference explored the relationship between minds, bodies and machines in the long nineteenth century, with a view to understanding the history of our technology-driven, post-human visions. It is in the nineteenth century that the relationship between the human and the machine under post-industrial capitalism becomes a pervasive theme. From Blake on the mills of the mind by which we are enslaved, to Carlyle's and Arnold's denunciation of the machinery of modern life, from Dickens's sooty fictional locomotive Mr Pancks, who 'snorted and sniffed and puffed and blew, like a little labouring steam-engine', and 'shot out […] cinders of principles, as if it were done by mechanical revolvency', to the alienated historical body of the late-nineteenth-century factory worker under Taylorization, whose movements and gestures were timed, regulated and rationalised to maximize efficiency; we find a cultural preoccupation with the mechanisation of the nineteenth-century human body that uncannily resonates with modern dreams and anxieties around technologies of the human.

  19. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). • Illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics • Emphasis on algorithmic advances that will allow re-application in other...

  20. Qualitative classification of milled rice grains using computer vision and metaheuristic techniques.

    Science.gov (United States)

    Zareiforoush, Hemad; Minaei, Saeid; Alizadeh, Mohammad Reza; Banakar, Ahmad

    2016-01-01

    Qualitative grading of milled rice grains was carried out in this study using a machine vision system combined with some metaheuristic classification approaches. Images of four different classes of milled rice, including Low-processed sound grains (LPS), Low-processed broken grains (LPB), High-processed sound grains (HPS), and High-processed broken grains (HPB), representing quality grades of the product, were acquired using a computer vision system. Four different metaheuristic classification techniques, including artificial neural networks, support vector machines, decision trees and Bayesian networks, were utilized to classify milled rice samples. Results of the validation process indicated that the artificial neural network with a 12-5-4 topology had the highest classification accuracy (98.72 %). Next came the support vector machine with the Universal Pearson VII kernel function (98.48 %), the decision tree with the REP algorithm (97.50 %), and the Bayesian network with the Hill Climber search algorithm (96.89 %), in decreasing order of accuracy. Results presented in this paper can be utilized for developing an efficient system for fully automated classification and sorting of milled rice grains.
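
    The comparison protocol described above, several classifiers trained on the same image-derived features and ranked by cross-validated accuracy, is easy to prototype. The sketch below uses scikit-learn on synthetic stand-in features; GaussianNB stands in for the Bayesian network, an RBF kernel stands in for the Pearson VII kernel, and none of the accuracies reproduce the paper's figures.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Stand-in for colour/morphology features of rice grain images; classes: LPS, LPB, HPS, HPB
rng = np.random.default_rng(5)
X = rng.normal(size=(400, 12))
y = np.repeat([0, 1, 2, 3], 100)
X[y == 1] += 0.8
X[y == 2] += 1.6
X[y == 3] += 2.4          # shift class means so the problem is learnable

classifiers = {
    "ANN (12-5-4)": MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    print(f"{name:14s} accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```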

  1. Precise positioning method for multi-process connecting based on binocular vision

    Science.gov (United States)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

    With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to solve the problems of narrow field of view, small depth of focus and numerous nonlinear distortions. Secondly, the extraction algorithms for law curves and free curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and then embedded in a CNC machining experiment platform. Finally, a verification experiment on the positioning accuracy was conducted, and the experimental results indicated that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.
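
    The geometric core of any binocular positioning system of this kind is triangulation: once the cameras are calibrated and the images rectified, a feature's disparity between the left and right views maps to depth via Z = f·B/d. The sketch below is a textbook pinhole-stereo illustration, not the paper's microscope-specific calibration; the focal length, baseline and pixel coordinates are invented.

```python
def triangulate_point(u_left, u_right, v, focal_px, baseline_mm, cx, cy):
    """Recover the 3D coordinates (in mm) of a matched feature in a rectified stereo pair."""
    disparity = u_left - u_right                  # pixels
    z = focal_px * baseline_mm / disparity        # depth from Z = f * B / d
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# Hypothetical rectified stereo setup
print(triangulate_point(u_left=652.0, u_right=598.0, v=410.0,
                        focal_px=2400.0, baseline_mm=30.0, cx=640.0, cy=400.0))
```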

  2. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  3. Design of an explainable machine learning challenge for video interviews

    NARCIS (Netherlands)

    Escalante, H.J.; Guyon, I.; Escalera, S.; Jacques, J.; Madadi, M.; Baró, X.; Ayache, S.; Viegas, E.; Güçlütürk, Y.; Güçlü, U.; Gerven, M.A.J. van; Lier, R.J. van

    2017-01-01

    This paper reviews and discusses research advances on "explainable machine learning" in computer vision. We focus on a particular area of the "Looking at People" (LAP) thematic domain: first impressions and personality analysis. Our aim is to make the computational intelligence and computer vision

  4. Monel Machining

    Science.gov (United States)

    1983-01-01

    Castle Industries, Inc. is a small machine shop manufacturing replacement plumbing repair parts, such as faucet, tub and ballcock seats. Therese Castley, president of Castle, decided to introduce Monel because it offered a chance to improve competitiveness and expand the product line. Before expanding, Castley sought NERAC assistance on Monel technology. NERAC (New England Research Application Center) provided an information package which proved very helpful. The NASA database was included in NERAC's search and yielded a wealth of information on machining Monel.

  5. Chemicals Industry Vision

    Energy Technology Data Exchange (ETDEWEB)

    none,

    1996-12-01

    Chemical industry leaders articulated a long-term vision for the industry, its markets, and its technology in the groundbreaking 1996 document Technology Vision 2020 - The U.S. Chemical Industry. (PDF 310 KB).

  6. Living with Low Vision

    Science.gov (United States)

    Living with Low Vision: a variety of eye conditions ... in which occupational therapy practitioners help people with low vision to function at the highest possible level. • Prevent ...

  7. Cataract Vision Simulator

    Science.gov (United States)

    What Do Cataracts Look Like? Cataract Vision Simulator (Jun. 11, 2014). How do cataracts affect your vision? A cataract is a clouding of the eye's ...

  8. Vision - night blindness

    Science.gov (United States)

    Vision - night blindness (//medlineplus.gov/ency/article/003039.htm). Night blindness is poor vision at night or in dim light. Considerations: Night ...

  9. A method to combine target volume data from 3D and 4D planned thoracic radiotherapy patient cohorts for machine learning applications

    NARCIS (Netherlands)

    Johnson, Corinne; Price, Gareth; Khalifa, Jonathan; Faivre-Finn, Corinne; Dekker, Andre; Moore, Christopher; van Herk, Marcel

    2017-01-01

    The gross tumour volume (GTV) is predictive of clinical outcome and consequently features in many machine-learned models. 4D-planning, however, has prompted substitution of the GTV with the internal gross target volume (iGTV). We present and validate a method to synthesise GTV data from the iGTV,

  10. Analysis of accidents leading to amputations associated with operating with press machines, using Ishikawa and SCAT Combined method in a car manufacturing company

    Directory of Open Access Journals (Sweden)

    J. Nematolahi

    2015-12-01

      Conclusion: According to the results, the main immediate cause of accidents leading to amputation while operating press machines is hurrying at work due to increased production volume, particularly in contractor companies. Furthermore, a non-dynamic HSE system accompanied by ineffective supervision of personnel's unsafe acts by the first layers of management is recognized as the basic cause of such accidents.

  11. Formal modeling of virtual machines

    Science.gov (United States)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  12. Machining of Complex Sculptured Surfaces

    CERN Document Server

    2012-01-01

    The machining of complex sculptured surfaces is a global technological topic in modern manufacturing, with relevance in both industrialized and emerging countries, particularly within the moulds and dies sector, whose applications include highly technological industries such as the automotive and aircraft industry. Machining of Complex Sculptured Surfaces considers new approaches to the manufacture of moulds and dies within these industries. The traditional technology employed in the manufacture of moulds and dies combined conventional milling and electro-discharge machining (EDM), but this has been replaced with high-speed milling (HSM), which has been applied in roughing, semi-finishing and finishing of moulds and dies with great success. Machining of Complex Sculptured Surfaces provides recent information on machining of complex sculptured surfaces, including modern CAM systems and process planning for three- and five-axis machining, as well as explanations of the advantages of HSM over traditional methods ra...

  13. Blindness and vision loss

    Science.gov (United States)

    ... have low vision, you may have trouble driving, reading, or doing small tasks such as sewing or ... lost vision. You should never ignore vision loss, thinking it will get better. Contact an ...

  14. Comparing Active Vision Models

    NARCIS (Netherlands)

    Croon, G.C.H.E. de; Sprinkhuizen-Kuyper, I.G.; Postma, E.O.

    2009-01-01

    Active vision models can simplify visual tasks, provided that they can select sensible actions given incoming sensory inputs. Many active vision models have been proposed, but a comparative evaluation of these models is lacking. We present a comparison of active vision models from two different

  15. Your Child's Vision

    Science.gov (United States)

    Your Child's Vision: Healthy eyes and vision are a critical part of kids' development. Their ...

  16. Comparing active vision models

    NARCIS (Netherlands)

    Croon, G.C.H.E. de; Sprinkhuizen-Kuyper, I.G.; Postma, E.O.

    2009-01-01

    Active vision models can simplify visual tasks, provided that they can select sensible actions given incoming sensory inputs. Many active vision models have been proposed, but a comparative evaluation of these models is lacking. We present a comparison of active vision models from two different

  17. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  18. A child's vision.

    Science.gov (United States)

    Nye, Christina

    2014-06-01

    Implementing standard vision screening techniques in the primary care practice is the most effective means to detect children with potential vision problems at an age when the vision loss may be treatable. A critical period of vision development occurs in the first few weeks of life; thus, it is imperative that serious problems are detected at this time. Although it is not possible to quantitate an infant's vision, evaluating ocular health appropriately can mean the difference between sight and blindness and, in the case of retinoblastoma, life or death. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. An active role for machine learning in drug development

    Science.gov (United States)

    Murphy, Robert F.

    2014-01-01

    Due to the complexity of biological systems, cutting-edge machine-learning methods will be critical for future drug development. In particular, machine-vision methods to extract detailed information from imaging assays and active-learning methods to guide experimentation will be required to overcome the dimensionality problem in drug development. PMID:21587249

  20. Efficient simulations of multicounter machines (Preliminary version)

    NARCIS (Netherlands)

    P.M.B. Vitányi (Paul)

    1982-01-01

    textabstractAn oblivious 1-tape Turing machine can on-line simulate a multicounter machine in linear time and logarithmic space. This leads to a linear cost combinational logic network implementing the first n steps of a multicounter machine and also to a linear time/logarithmic space on-line

  1. Acquired color vision deficiency.

    Science.gov (United States)

    Simunovic, Matthew P

    2016-01-01

    Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Machine Protection

    CERN Document Server

    Zerlauth, Markus; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  3. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.

  4. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available ... but it is computationally expensive and difficult to implement. New feature manipulation techniques are proposed which incorporate relational and positional information of the features into the extraction and data association steps. Keywords: Stereo Vision, Machine...

  5. Vision Guided X-Y Table For Inspection

    Science.gov (United States)

    Chen, Michel J.

    1983-05-01

    This paper demonstrates an example of the utilization of machine intelligence in automation. A system was developed to perform precision part inspection and automated workpiece handling. This system consists of a robot which is utilized to perform a simple pick-and-place function, a MI VS-110 machine vision system which provides a vision library and the functions of masking and programmable image overlay, and an X-Y-θ table. This setup demonstrates a simplified approach to machine vision and automation. In this complete sensorimotor system, BASIC was the programming language used to develop and integrate the control software for the inspection process by using the MI DS-100 machine vision development system. By calling vision functions, X-Y-θ table commands and simple robot commands, the task of parts' inspection under high- and low-resolution cameras, sorting, as well as disposition is shown to be easy to conceptualize and implement. This robot system can perform tasks without the necessity of prealigning or jigging workpieces. Numerous other applications can be accomplished by adopting a similar methodology.

  6. EFSA NDA Panel (EFSA Panel on Dietetic Products, Nutrition and Allergies), 2014. Scientific Opinion on the substantiation of a health claim related to a combination of lutein and zeaxanthin and improved vision under bright light conditions pursuant to Article 13(5) of Regulation (EC) No 1924/2006

    DEFF Research Database (Denmark)

    Tetens, Inge

    2014-01-01

    EFSA NDA Panel (EFSA Panel on Dietetic Products, Nutrition and Allergies), 2014. Scientific Opinion on the substantiation of a health claim related to a combination of lutein and zeaxanthin and improved vision under bright light conditions pursuant to Article 13(5) of Regulation (EC) No 1924/2006....

  7. Fully automatic CNC machining production system

    Directory of Open Access Journals (Sweden)

    Lee Jeng-Dao

    2017-01-01

    Full Text Available Customized manufacturing is increasing year by year. Changing consumption habits have shortened product life cycles. Therefore, many countries view Industry 4.0 as a way to achieve more efficient and more flexible automated production. Developing an automatic loading and unloading CNC machining system with vision inspection is the first step in this industrial upgrading. In this study, a CNC controller is adopted as the main controller to command the robot, conveyor, and other equipment. Moreover, machine vision systems are used to detect the position of the material on the conveyor and the edge of the machining material. In addition, Open CNC and SCADA software are utilized to provide real-time monitoring, remote control, alarm email notification, and parameter collection. Furthermore, RFID has been added for employee classification and management. Machine handshaking has been successfully implemented to achieve automatic vision detection, edge-tracing measurement, machining, and system parameter collection for data analysis, accomplishing industrial automation system integration with real-time monitoring.

  8. Study of on-machine error identification and compensation methods for micro machine tools

    Science.gov (United States)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-08-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installation of the workpiece, the measurement and compensation method should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating the image reconstruction method, camera pixel correction, coordinate transformation, the error identification algorithm, and the trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual cutting points and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
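
    As a rough illustration of the measurement chain described above (Canny edge detection, pixel calibration, comparison of the reconstructed contour against the theoretical one), the sketch below computes point-wise deviations for a calibrated image. The pixel scale, contour formats and the brute-force nearest-point matching are assumptions, not details taken from the paper.

```python
# Illustrative sketch of the vision-based error identification described above:
# Canny edges -> calibrated contour points -> nearest-point deviation from the
# theoretical contour. The pixel scale and data formats are assumed, and the
# brute-force matching stands in for the paper's moving matching window.
import cv2
import numpy as np

MM_PER_PIXEL = 0.002  # assumed camera calibration factor

def measure_contour_errors(image_gray, theoretical_mm):
    """For each detected edge point, return its distance (mm) to the closest
    point on the theoretical contour, given as an (N, 2) array in mm."""
    edges = cv2.Canny(image_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    actual_mm = np.column_stack([xs, ys]).astype(float) * MM_PER_PIXEL
    dists = np.linalg.norm(actual_mm[:, None, :] - theoretical_mm[None, :, :], axis=2)
    return dists.min(axis=1)

if __name__ == "__main__":
    img = np.zeros((200, 200), np.uint8)
    cv2.circle(img, (100, 100), 60, 255, 2)        # stand-in for a machined contour
    t = np.linspace(0, 2 * np.pi, 360)
    theory = np.column_stack([100 + 60 * np.cos(t), 100 + 60 * np.sin(t)]) * MM_PER_PIXEL
    err = measure_contour_errors(img, theory)
    print(f"mean deviation {err.mean():.4f} mm, max {err.max():.4f} mm")
```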

  9. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document...

  10. Representational Machines

    DEFF Research Database (Denmark)

    Petersson, Dag; Dahlgren, Anna; Vestberg, Nina Lager

    to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological...

  11. Combination of Machining Parameters to Optimize Surface Roughness and Chip Thickness during End Milling Process on Aluminium 6351-T6 Alloy Using Taguchi Design Method

    Directory of Open Access Journals (Sweden)

    Reddy Sreenivasulu

    2016-12-01

    Full Text Available In any machining operation, quality is an important conflicting objective. In order to assure high productivity, some extent of quality has to be compromised; similarly, productivity decreases when efforts are channelized to enhance quality. In this study, the experiments were carried out on a CNC vertical machining center to machine 10 mm slots on an Al 6351-T6 alloy workpiece with a K10 carbide, four-flute end milling cutter. The cutting speed, the feed rate and the depth of cut were varied in the experiments. Each experiment was conducted three times, and the surface roughness and chip thickness were measured by a Surf Test-211 series surface analyser (Mitutoyo) and a digital micrometer (Mitutoyo) with a least count of 0.001 mm, respectively. The selection of the orthogonal array depends on the total degrees of freedom of the process parameters: the total degrees of freedom (DOF) associated with the three parameters is 3 × 2 = 6, and the DOF of the orthogonal array should be greater than or at least equal to that of the process parameters. Thereby, an L9 orthogonal array, having 9 − 1 = 8 degrees of freedom, has been considered; since each experiment is conducted three times, the total degrees of freedom become (9 × 3) − 1 = 26. Finally, a confirmation test (ANOVA) was conducted to compare the predicted values with the experimental values and confirm its effectiveness in the analysis of surface roughness and chip thickness. Surface roughness (Ra) is greatly reduced from 0.145 µm to 0.1326 µm and chip thickness (Ct) is slightly reduced from 0.1 mm to 0.085 mm; the chips were collected after machining in every experiment, and a few of them were randomly selected for thickness measurement using the digital micrometer.
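
    The degree-of-freedom bookkeeping in the abstract can be checked with a few lines of arithmetic; the assumption of three factors at three levels is what makes an L9 array the natural choice here.

```python
# Quick check of the degree-of-freedom argument above, assuming an L9 Taguchi
# design with 3 factors at 3 levels and 3 replicates per run.
factors, levels, runs, replicates = 3, 3, 9, 3

dof_parameters = factors * (levels - 1)       # 3 x 2 = 6
dof_l9_array   = runs - 1                     # 9 - 1 = 8
dof_total      = runs * replicates - 1        # 9 x 3 - 1 = 26

assert dof_l9_array >= dof_parameters         # the array must cover the parameter DOF
print(dof_parameters, dof_l9_array, dof_total)  # -> 6 8 26
```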

  12. PCVMZM: Using the Probabilistic Classification Vector Machines Model Combined with a Zernike Moments Descriptor to Predict Protein–Protein Interactions from Protein Sequences

    Science.gov (United States)

    Wang, Yanbin; You, Zhuhong; Li, Xiao; Chen, Xing; Jiang, Tonghai; Zhang, Jingting

    2017-01-01

    Protein–protein interactions (PPIs) are essential for most processes in living organisms. Thus, detecting PPIs is extremely important for understanding the molecular mechanisms of biological systems. Although many PPI data have been generated by high-throughput technologies for a variety of organisms, the whole interactome is still far from complete. In addition, the high-throughput technologies for detecting PPIs have some unavoidable drawbacks, including time consumption, high cost, and high error rates. In recent years, with the development of machine learning, computational methods have been broadly used to predict PPIs and can achieve good prediction rates. In this paper, we present PCVMZM, a computational method based on a Probabilistic Classification Vector Machines (PCVM) model and a Zernike moments (ZM) descriptor for predicting PPIs from protein amino acid sequences. Specifically, the Zernike moments (ZM) descriptor is used to extract protein evolutionary information from the Position-Specific Scoring Matrix (PSSM) generated by the Position-Specific Iterated Basic Local Alignment Search Tool (PSI-BLAST). Then, the PCVM classifier is used to infer the interactions among proteins. When applied to the Yeast and H. pylori PPI datasets, the proposed method achieves average prediction accuracies of 94.48% and 91.25%, respectively. In order to further evaluate the performance of the proposed method, the state-of-the-art support vector machine (SVM) classifier is used and compared with the PCVM model. Experimental results on the Yeast dataset show that the performance of the PCVM classifier is better than that of the SVM classifier. The experimental results indicate that our proposed method is robust, powerful and feasible, and can be used as a helpful tool for proteomics research. PMID:28492483

  13. PCVMZM: Using the Probabilistic Classification Vector Machines Model Combined with a Zernike Moments Descriptor to Predict Protein-Protein Interactions from Protein Sequences.

    Science.gov (United States)

    Wang, Yanbin; You, Zhuhong; Li, Xiao; Chen, Xing; Jiang, Tonghai; Zhang, Jingting

    2017-05-11

    Protein-protein interactions (PPIs) are essential for most processes in living organisms. Thus, detecting PPIs is extremely important for understanding the molecular mechanisms of biological systems. Although many PPI data have been generated by high-throughput technologies for a variety of organisms, the whole interactome is still far from complete. In addition, the high-throughput technologies for detecting PPIs have some unavoidable drawbacks, including time consumption, high cost, and high error rates. In recent years, with the development of machine learning, computational methods have been broadly used to predict PPIs and can achieve good prediction rates. In this paper, we present PCVMZM, a computational method based on a Probabilistic Classification Vector Machines (PCVM) model and a Zernike moments (ZM) descriptor for predicting PPIs from protein amino acid sequences. Specifically, the Zernike moments (ZM) descriptor is used to extract protein evolutionary information from the Position-Specific Scoring Matrix (PSSM) generated by the Position-Specific Iterated Basic Local Alignment Search Tool (PSI-BLAST). Then, the PCVM classifier is used to infer the interactions among proteins. When applied to the Yeast and H. pylori PPI datasets, the proposed method achieves average prediction accuracies of 94.48% and 91.25%, respectively. In order to further evaluate the performance of the proposed method, the state-of-the-art support vector machine (SVM) classifier is used and compared with the PCVM model. Experimental results on the Yeast dataset show that the performance of the PCVM classifier is better than that of the SVM classifier. The experimental results indicate that our proposed method is robust, powerful and feasible, and can be used as a helpful tool for proteomics research.
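
    A rough sketch of this kind of pipeline (PSSM summarised by Zernike moments, then a probabilistic kernel classifier) is given below. Since PCVM implementations are not part of mainstream libraries, a probability-calibrated SVM stands in for the PCVM, and the use of the mahotas library for the Zernike computation, the PSSM shapes and the toy training data are all assumptions rather than the authors' implementation.

```python
# Rough sketch of a PCVMZM-style pipeline: PSSM -> Zernike moment descriptor ->
# probabilistic kernel classifier. A probability-calibrated SVM stands in for
# the PCVM, mahotas is assumed for the Zernike computation, and the data below
# are toy stand-ins, not the authors' datasets.
import numpy as np
from mahotas.features import zernike_moments
from sklearn.svm import SVC

def pssm_to_descriptor(pssm, degree=8):
    """Encode a (sequence_length x 20) PSSM as a fixed-length Zernike vector."""
    img = (pssm - pssm.min()) / (pssm.max() - pssm.min() + 1e-9)  # scale to [0, 1]
    radius = max(img.shape) // 2
    return zernike_moments(img, radius, degree=degree)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Each sample: concatenated descriptors of a (random, toy) protein pair.
    X = np.array([np.concatenate([pssm_to_descriptor(rng.normal(size=(120, 20))),
                                  pssm_to_descriptor(rng.normal(size=(95, 20)))])
                  for _ in range(40)])
    y = rng.integers(0, 2, size=40)            # 1 = interacting pair, 0 = not
    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    print(clf.predict_proba(X[:3]))
```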

  14. A Combination of Machine Learning and Cerebellar-like Neural Networks for the Motor Control and Motor Learning of the Fable Modular Robot

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Pacheco, Moises

    2017-01-01

    We scaled up a bio-inspired control architecture for the motor control and motor learning of a real modular robot. In our approach, the Locally Weighted Projection Regression algorithm (LWPR) and a cerebellar microcircuit coexist, in the form of a Unit Learning Machine. The LWPR algorithm optimizes...... the input space and learns the internal model of a single robot module to command the robot to follow a desired trajectory with its end-effector. The cerebellar-like microcircuit refines the LWPR output delivering corrective commands. We contrasted distinct cerebellar-like circuits including analytical...

  15. Mathematical leadership vision.

    Science.gov (United States)

    Hamburger, Y A

    2000-11-01

    This article is an analysis of a new type of leadership vision, the kind of vision that is becoming increasingly pervasive among leaders in the modern world. This vision appears to offer a new horizon, whereas, in fact it delivers to its target audience a finely tuned version of the already existing ambitions and aspirations of the target audience. The leader, with advisors, has examined the target audience and has used the results of extensive research and statistical methods concerning the group to form a picture of its members' lifestyles and values. On the basis of this information, the leader has built a "vision." The vision is intended to create an impression of a charismatic and transformational leader when, in fact, it is merely a response. The systemic, arithmetic, and statistical methods employed in this operation have led to the coining of the terms mathematical leader and mathematical vision.

  16. Implementing early vision algorithms in analog hardware: an overview

    Science.gov (United States)

    Koch, Christof

    1991-07-01

    In the last ten years, significant progress has been made in understanding the first steps in visual processing. Thus, a large number of algorithms exist that locate edges, compute disparities, estimate motion fields and find discontinuities in depth, motion, color and intensity. However, the application of these algorithms to real-life vision problems has been less successful, mainly because the associated computational cost prevents real-time machine vision implementations on anything but large-scale expensive digital computers. We here review the use of analog, special-purpose vision hardware, integrating image acquisition with early vision algorithms on a single VLSI chip. Such circuits have been designed and successfully tested for edge detection, surface interpolation, computing optical flow and sensor fusion. Thus, it appears that real-time, small, power-lean and robust analog computers are making a limited comeback in the form of highly dedicated, smart vision chips.

  17. FPGA Vision Data Architecture

    Science.gov (United States)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  18. Nontraditional machining processes research advances

    CERN Document Server

    2013-01-01

    Nontraditional machining employs processes that remove material by various methods involving thermal, electrical, chemical and mechanical energy or even combinations of these. Nontraditional Machining Processes covers recent research and development in techniques and processes which focus on achieving high accuracies and good surface finishes, parts machined without burrs or residual stresses especially with materials that cannot be machined by conventional methods. With applications to the automotive, aircraft and mould and die industries, Nontraditional Machining Processes explores different aspects and processes through dedicated chapters. The seven chapters explore recent research into a range of topics including laser assisted manufacturing, abrasive water jet milling and hybrid processes. Students and researchers will find the practical examples and new processes useful for both reference and for developing further processes. Industry professionals and materials engineers will also find Nontraditional M...

  19. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  20. Measuring Vision in Children

    Directory of Open Access Journals (Sweden)

    Petra Verweyen

    2004-01-01

    Full Text Available Measuring vision in children is a special skill requiring time, patience and understanding. Methods should be adapted to the child’s age, abilities, knowledge and experience. Young children are not able to describe their vision or explain their visual symptoms. Through observation, and with information from the mother or guardian, functional vision can be evaluated. While testing and observing children, an experienced assessor notices their responses to visual stimuli. These must be compared with the expected functional vision for children of the same age and abilities, so it is important to know the normal visual development.

  1. Electric machines

    CERN Document Server

    Gross, Charles A

    2006-01-01

    BASIC ELECTROMAGNETIC CONCEPTS: Basic Magnetic Concepts; Magnetically Linear Systems: Magnetic Circuits; Voltage, Current, and Magnetic Field Interactions; Magnetic Properties of Materials; Nonlinear Magnetic Circuit Analysis; Permanent Magnets; Superconducting Magnets; The Fundamental Translational EM Machine; The Fundamental Rotational EM Machine; Multiwinding EM Systems; Leakage Flux; The Concept of Ratings in EM Systems; Summary; Problems. TRANSFORMERS: The Ideal n-Winding Transformer; Transformer Ratings and Per-Unit Scaling; The Nonideal Three-Winding Transformer; The Nonideal Two-Winding Transformer; Transformer Efficiency and Voltage Regulation; Practical Considerations; The Autotransformer; Operation of Transformers in Three-Phase Environments; Sequence Circuit Models for Three-Phase Transformer Analysis; Harmonics in Transformers; Summary; Problems. BASIC MECHANICAL CONSIDERATIONS: Some General Perspectives; Efficiency; Load Torque-Speed Characteristics; Mass Polar Moment of Inertia; Gearing; Operating Modes; Translational Systems; A Comprehensive Example: The Elevator; P...

  2. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  3. Dynamic analysis of centrifugal machines rotors supported on ball bearings by combined application of 3D and beam finite element models

    Science.gov (United States)

    Pavlenko, I. V.; Simonovskiy, V. I.; Demianenko, M. M.

    2017-08-01

    This research paper is aimed at investigating the rotor dynamics of multistage centrifugal machines with ball bearings by using the computer programs “Critical frequencies of the rotor” and “Forced oscillations of the rotor,” which implement a mathematical model based on beam finite elements. Free and forced oscillations of the rotor of the multistage centrifugal oil pump NPS 200-700 are studied, taking into account the analytical dependence of bearing stiffness on rotor speed, which is defined beforehand by approximating the results of numerical simulation in ANSYS with 3D finite elements. The calculations determine the free and forced oscillations of the rotor and the corresponding vibration mode shapes, as well as the shape of the forced oscillation at the operating frequency for an acceptable residual unbalance.
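
    The underlying computation in both programs is a generalized eigenvalue problem, K(n) x = ω² M x, with the bearing stiffness entering K as a function of rotor speed n. The sketch below solves a lumped two-degree-of-freedom version with an assumed stiffness-versus-speed fit; the matrices and numbers are illustrative and not taken from the NPS 200-700 pump model.

```python
# Minimal illustration of the critical-frequency calculation discussed above:
# solve K(n) x = w^2 M x with bearing stiffness depending on rotor speed n.
# The lumped 2-DOF matrices and the stiffness fit are illustrative only.
import numpy as np
from scipy.linalg import eigh

M = np.diag([120.0, 80.0])                  # lumped masses, kg (assumed)
K_SHAFT = 5.0e8                             # shaft bending stiffness, N/m (assumed)

def bearing_stiffness(speed_rpm):
    """Assumed polynomial fit of bearing stiffness versus rotor speed (N/m)."""
    return 2.0e8 + 1.5e4 * speed_rpm

def critical_frequencies_hz(speed_rpm):
    kb = bearing_stiffness(speed_rpm)
    K = np.array([[K_SHAFT + kb, -K_SHAFT],
                  [-K_SHAFT,      K_SHAFT + kb]])
    w_squared = eigh(K, M, eigvals_only=True)   # generalized eigenvalues w^2
    return np.sqrt(w_squared) / (2 * np.pi)

for rpm in (1000, 3000, 6000):
    print(rpm, np.round(critical_frequencies_hz(rpm), 1))
```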

  4. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    Science.gov (United States)

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from the scratch with emphasis on readability and…

  5. Archetypal Analysis for Machine Learning

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    2010-01-01

    Archetypal analysis (AA) proposed by Cutler and Breiman in [1] estimates the principal convex hull of a data set. As such AA favors features that constitute representative ’corners’ of the data, i.e. distinct aspects or archetypes. We will show that AA enjoys the interpretability of clustering - ...... for K-means [2]. We demonstrate that the AA model is relevant for feature extraction and dimensional reduction for a large variety of machine learning problems taken from computer vision, neuroimaging, text mining and collaborative filtering....

  6. Laser machining of explosives

    Science.gov (United States)

    Perry, Michael D.; Stuart, Brent C.; Banks, Paul S.; Myers, Booth R.; Sefcik, Joseph A.

    2000-01-01

    The invention consists of a method for machining (cutting, drilling, sculpting) of explosives (e.g., TNT, TATB, PETN, RDX, etc.). By using pulses of a duration in the range of 5 femtoseconds to 50 picoseconds, extremely precise and rapid machining can be achieved with essentially no heat or shock affected zone. In this method, material is removed by a nonthermal mechanism. A combination of multiphoton and collisional ionization creates a critical density plasma in a time scale much shorter than electron kinetic energy is transferred to the lattice. The resulting plasma is far from thermal equilibrium. The material is in essence converted from its initial solid-state directly into a fully ionized plasma on a time scale too short for thermal equilibrium to be established with the lattice. As a result, there is negligible heat conduction beyond the region removed resulting in negligible thermal stress or shock to the material beyond a few microns from the laser machined surface. Hydrodynamic expansion of the plasma eliminates the need for any ancillary techniques to remove material and produces extremely high quality machined surfaces. There is no detonation or deflagration of the explosive in the process and the material which is removed is rendered inert.

  7. Military Vision Research Program

    Science.gov (United States)

    2012-10-01

    a result, optic nerve or brain injury can lead to permanent loss of vision or cognitive functions. Unfortunately, there are currently no medical...

  8. Computer vision for sports

    DEFF Research Database (Denmark)

    Thomas, Graham; Gade, Rikke; Moeslund, Thomas B.

    2017-01-01

    fixed to players or equipment is generally not possible. This provides a rich set of opportunities for the application of computer vision techniques to help the competitors, coaches and audience. This paper discusses a selection of current commercial applications that use computer vision for sports...

  9. New Term, New Vision?

    Science.gov (United States)

    Ravenhall, Mark

    2011-01-01

    During the affluent noughties it was sometimes said of government that it had "more visions than Mystic Meg and more pilots than British Airways". In 2011, the pilots, the pathfinders, the new initiatives are largely gone--implementation is the name of the game--but the visions remain. The latest one, as it affects adult learners, is in…

  10. Degas: Vision and Perception.

    Science.gov (United States)

    Kendall, Richard

    1988-01-01

    The art of Edgar Degas is discussed in relation to his impaired vision, including amblyopia, later blindness in one eye, corneal scarring, and photophobia. Examined are ways in which Degas compensated for vision problems, and dominant themes of his art such as the process of perception and spots of brilliant light. (Author/JDD)

  11. Jane Addams’ Social Vision

    DEFF Research Database (Denmark)

    Villadsen, Kaspar

    2018-01-01

    resonated with key tenets of social gospel theology, which imbued her texts with an overarching vision of humanity’s progressive history. It is suggested that Addams’ vision of a major transition in industrial society, one involving a "Christian renaissance" and individuals’ transformation into "socialized...

  12. Copenhagen Energy Vision

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Rasmus Søgaard; Connolly, David

    The short-term goal for The City of Copenhagen is a CO2 neutral energy supply by the year 2025, and the long-term vision for Denmark is a 100% renewable energy (RE) supply by the year 2050. In this project, it is concluded that Copenhagen plays a key role in this transition. The long-term vision...

  13. Visions, Actions and Partnerships

    International Development Research Centre (IDRC) Digital Library (Canada)

    freelance

    Evaluation Association (AFREA). Comments on this document can be sent to ccaa@idrc.ca. Introduction. “Visions, actions, partnerships” (VAP) is presented as a participatory tool that can be used ... The tool embraces the philosophy of the Visions actions requests approach (Beaulieu et al., 2002) based on the formulation of ...

  14. Computer vision cracks the leaf code.

    Science.gov (United States)

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas

    2016-03-22

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.
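
    The "codebook of visual elements" mentioned above is, in spirit, a bag-of-visual-words model: local descriptors are quantised against a learned vocabulary, and each image becomes a histogram of codewords before classification. The sketch below uses ORB descriptors and k-means as generic stand-ins for the paper's own feature-learning pipeline and is not claimed to reproduce its results.

```python
# Bag-of-visual-words sketch of the "codebook of visual elements" idea above:
# quantise local descriptors against a learned codebook and summarise each leaf
# image as a codeword histogram. ORB + k-means are generic stand-ins and are
# not claimed to reproduce the paper's feature pipeline or results.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def orb_descriptors(image_gray):
    orb = cv2.ORB_create(nfeatures=300)
    _, desc = orb.detectAndCompute(image_gray, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def build_codebook(images, k=64):
    all_desc = np.vstack([orb_descriptors(im) for im in images]).astype(float)
    return KMeans(n_clusters=k, n_init=5, random_state=0).fit(all_desc)

def bovw_histogram(image_gray, codebook):
    desc = orb_descriptors(image_gray).astype(float)
    words = codebook.predict(desc) if len(desc) else np.empty(0, int)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Usage with a list of grayscale leaf images `imgs` and family labels `y`:
#   codebook = build_codebook(imgs)
#   X = np.array([bovw_histogram(im, codebook) for im in imgs])
#   clf = LogisticRegression(max_iter=1000).fit(X, y)
```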

  15. Simulating Turing machines on Maurer machines

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2008-01-01

    In a previous paper, we used Maurer machines to model and analyse micro-architectures. In the current paper, we investigate the connections between Turing machines and Maurer machines with the purpose to gain an insight into computability issues relating to Maurer machines. We introduce ways to

  16. Taking Care of Your Vision

    Science.gov (United States)

    ... are important parts of keeping your peepers perfect. Vision Basics: One of the best things you can ...

  17. Vision system for diagnostic task | Merad | Global Journal of Pure ...

    African Journals Online (AJOL)

    Our problem is a diagnostic task. Due to environment degraded conditions, direct measurements are not possible. Due to the rapidity of the machine, human intervention is not possible in case of position fault. So, an oriented vision solution is proposed. The problem must be solved for high velocity industrial tooling ...

  18. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  19. Machine capability index evaluation of machining center

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Won Pyo [Korea Institute of Industrial Technology, Ansan (Korea, Republic of)

    2013-10-15

    Recently, there has been an increasing need to produce more precise products, with only the smallest deviations from a defined target value. Machine capability is the ability of a machine tool to produce parts within the tolerance interval. Capability indices are a statistical way of describing how well a product is machined compared to defined target values and tolerances. Currently, there is no standardized way to acquire a machine capability value. This paper describes how machine capability indices are evaluated in machining centers. After the machining of specimens, straightness, roundness and positioning accuracy were measured using CMM(coordinate measuring machine). These measured values and defined tolerances were used to evaluate the machine capability index. It will be useful for the industry to have standardized ways to choose and calculate machine capability indices.
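
    Machine capability indices are computed like process capability indices: Cm compares the tolerance width to the machine's natural spread 6σ, and Cmk additionally penalises an off-centre mean. The short sketch below applies these formulas to made-up positioning measurements; the tolerance limits and values are assumptions for illustration.

```python
# Machine capability indices for a measured feature, as discussed above:
#   Cm  = (USL - LSL) / (6 * sigma)
#   Cmk = min(USL - mean, mean - LSL) / (3 * sigma)
# The tolerance limits and measurements below are illustrative only.
import statistics

def machine_capability(measurements, lsl, usl):
    mean = statistics.fmean(measurements)
    sigma = statistics.stdev(measurements)
    cm = (usl - lsl) / (6 * sigma)
    cmk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cm, cmk

# Example: positioning error of a machining centre, tolerance +/- 10 um.
positions_um = [1.2, -0.8, 0.5, 2.1, -1.4, 0.9, 0.3, -0.6, 1.7, -0.2]
cm, cmk = machine_capability(positions_um, lsl=-10.0, usl=10.0)
print(f"Cm = {cm:.2f}, Cmk = {cmk:.2f}")
```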

  20. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    Science.gov (United States)

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in MATLAB, the median action recognition time was 24.3s and the robot execution time was 17.7s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  1. Literature and information in vision care and vision science.

    Science.gov (United States)

    Goss, David A

    2008-11-01

    The explosion of information in vision care and vision science makes keeping up with the literature and information in the field challenging. This report examines the nature of literature and information in vision care and vision science. A variety of topics are discussed, including the general nature of scientific and clinical journals, journals in vision science and vision care, resources available for searches for literature and information, and issues involved in the evaluation of journals and other information sources. Aspects of the application of citation analysis to vision care and vision science are reviewed, and a new citation analysis of a leading textbook in vision care (Borish's Clinical Refraction) is presented. This report is directed toward anyone who wants to be more informed about the literature of vision care and vision science, whether they are students, clinicians, educators, or librarians.

  2. A new approach to theoretical investigations of high harmonics generation by means of fs laser interaction with overdense plasma layers. Combining particle-in-cell simulations with machine learning.

    Science.gov (United States)

    Mihailescu, A.

    2016-12-01

    Within the past decade, various experimental and theoretical investigations have been performed in the field of high-order harmonics generation (HHG) by means of femtosecond (fs) laser pulses interacting with laser produced plasmas. Numerous potential future applications thus arise. Beyond achieving higher conversion efficiency for higher harmonic orders and hence harmonic power and brilliance, there are more ambitious scientific goals such as attaining shorter harmonic wavelengths or reducing harmonic pulse durations towards the attosecond and even the zeptosecond range. High order harmonics are also an attractive diagnostic tool for the laser-plasma interaction process itself. Particle-in-Cell (PIC) simulations are known to be one of the most important numerical instruments employed in plasma physics and in laser-plasma interaction investigations. The novelty brought by this paper consists in combining the PIC method with several machine learning approaches. For predictive modelling purposes, a universal functional approximator is used, namely a multi-layer perceptron (MLP), in conjunction with a self-organizing map (SOM). The training sets have been retrieved from the PIC simulations and also from the available literature in the field. The results demonstrate the potential utility of machine learning in predicting optimal interaction scenarios for gaining higher order harmonics or harmonics with particular features such as a particular wavelength range, a particular harmonic pulse duration or a certain intensity. Furthermore, the author will show how machine learning can be used for estimations of electronic temperatures, proving that it can be a reliable tool for obtaining better insights into the fs laser interaction physics.
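
    The predictive-modelling part described above, training a multi-layer perceptron on input/output pairs harvested from PIC runs, can be sketched with scikit-learn. The chosen input features (normalised laser amplitude, density scale length, incidence angle), the target and the synthetic data are assumptions for illustration, and the SOM stage of the paper is omitted.

```python
# Sketch of the surrogate-model idea above: an MLP trained on (laser/plasma
# parameters -> harmonic yield) pairs harvested from PIC runs. The feature
# choice, target and synthetic data are illustrative assumptions; the SOM
# stage of the paper is omitted.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns: normalised laser amplitude a0, density scale length L/lambda, incidence angle (deg)
X = rng.uniform([1.0, 0.05, 30.0], [20.0, 0.5, 60.0], size=(300, 3))
# Synthetic stand-in for a PIC-derived harmonic conversion efficiency (log10).
y = np.log10(1e-6 + 1e-4 * X[:, 0] ** 2 * np.exp(-5 * X[:, 1]) *
             np.cos(np.radians(X[:, 2])) ** 2)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[10.0, 0.1, 45.0]]))   # predicted log10 conversion efficiency
```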

  3. Identification of Novel Plant Peroxisomal Targeting Signals by a Combination of Machine Learning Methods and in Vivo Subcellular Targeting Analyses[W

    Science.gov (United States)

    Lingner, Thomas; Kataya, Amr R.; Antonicelli, Gerardo E.; Benichou, Aline; Nilssen, Kjersti; Chen, Xiong-Yan; Siemsen, Tanja; Morgenstern, Burkhard; Meinicke, Peter; Reumann, Sigrun

    2011-01-01

    In the postgenomic era, accurate prediction tools are essential for identification of the proteomes of cell organelles. Prediction methods have been developed for peroxisome-targeted proteins in animals and fungi but are missing specifically for plants. For development of a predictor for plant proteins carrying peroxisome targeting signals type 1 (PTS1), we assembled more than 2500 homologous plant sequences, mainly from EST databases. We applied a discriminative machine learning approach to derive two different prediction methods, both of which showed high prediction accuracy and recognized specific targeting-enhancing patterns in the regions upstream of the PTS1 tripeptides. Upon application of these methods to the Arabidopsis thaliana genome, 392 gene models were predicted to be peroxisome targeted. These predictions were extensively tested in vivo, resulting in a high experimental verification rate of Arabidopsis proteins previously not known to be peroxisomal. The prediction methods were able to correctly infer novel PTS1 tripeptides, which even included novel residues. Twenty-three newly predicted PTS1 tripeptides were experimentally confirmed, and a high variability of the plant PTS1 motif was discovered. These prediction methods will be instrumental in identifying low-abundance and stress-inducible peroxisomal proteins and defining the entire peroxisomal proteome of Arabidopsis and agronomically important crop plants. PMID:21487095
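
    The predictors described above score a candidate protein from the residues around its C-terminal PTS1 tripeptide. A hedged sketch of that general idea follows, one-hot encoding the last few residues for a simple discriminative classifier; the window length, encoding, classifier and toy sequences are assumptions, not the authors' actual models.

```python
# Simplified sketch of PTS1 prediction as described above: one-hot encode the
# C-terminal residues (tripeptide plus upstream context) and train a linear
# classifier. Window size, encoding, model and toy sequences are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
WINDOW = 12   # number of C-terminal residues used as context (assumed)

def encode_cterm(seq):
    """One-hot encode the last WINDOW residues of a protein sequence."""
    tail = seq[-WINDOW:].rjust(WINDOW, "X")
    vec = np.zeros((WINDOW, len(AMINO_ACIDS)))
    for pos, aa in enumerate(tail):
        if aa in AA_INDEX:
            vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

# Toy training set: sequences ending in PTS1-like tripeptides versus others.
positives = ["MSTAVLQDKASHLSKL", "MAAQRTPLEPSIESRL", "MVSTEKQNALGSAKI"]
negatives = ["MSTAVLQDKASHLGGA", "MAAQRTPLEPSIEDDE", "MVSTEKQNALGSPPP"]
X = np.array([encode_cterm(s) for s in positives + negatives])
y = np.array([1] * len(positives) + [0] * len(negatives))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([encode_cterm("MKQLSDTTAVANNSRL")])[:, 1])  # ends in "SRL"
```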

  4. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    Science.gov (United States)

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.
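
    The core of the coupled model above is to partition the study area into prediction regions and then fit a separately tuned SVM in each region. The sketch below uses a coordinate-grid split and a plain grid search as stand-ins for the GWR segmentation and the PSO optimiser, respectively; the data and region count are synthetic assumptions.

```python
# Sketch of the region-wise SVM idea above: split samples into spatial regions,
# then tune and fit one RBF-SVM per region. A coordinate-grid split and a grid
# search stand in for the paper's GWR segmentation and PSO optimisation.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def region_id(coords, n_bins=2):
    """Assign each (x, y) coordinate to a cell of an n_bins x n_bins grid."""
    edges_x = np.quantile(coords[:, 0], np.linspace(0, 1, n_bins + 1))
    edges_y = np.quantile(coords[:, 1], np.linspace(0, 1, n_bins + 1))
    ix = np.clip(np.searchsorted(edges_x, coords[:, 0]) - 1, 0, n_bins - 1)
    iy = np.clip(np.searchsorted(edges_y, coords[:, 1]) - 1, 0, n_bins - 1)
    return ix * n_bins + iy

def fit_region_models(coords, X, y, n_bins=2):
    rids = region_id(coords, n_bins)
    grid = {"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]}
    return {rid: GridSearchCV(SVC(kernel="rbf"), grid, cv=3).fit(X[rids == rid], y[rids == rid])
            for rid in np.unique(rids)}

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    coords = rng.uniform(0, 100, size=(200, 2))      # easting / northing
    X = rng.normal(size=(200, 6))                    # environmental factors
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)
    models = fit_region_models(coords, X, y)
    print({rid: m.best_params_ for rid, m in models.items()})
```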

  5. Performance characteristics of the AmpliSeq Cancer Hotspot panel v2 in combination with the Ion Torrent Next Generation Sequencing Personal Genome Machine.

    Science.gov (United States)

    Butler, Kimberly S; Young, Megan Y L; Li, Zhihua; Elespuru, Rosalie K; Wood, Steven C

    2016-02-01

    Next-Generation Sequencing is a rapidly advancing technology that has research and clinical applications. For many cancers, it is important to know the precise mutation(s) present, as specific mutations could indicate or contra-indicate certain treatments as well as be indicative of prognosis. Using the Ion Torrent Personal Genome Machine and the AmpliSeq Cancer Hotspot panel v2, we sequenced two pancreatic cancer cell lines, BxPC-3 and HPAF-II, alone or in mixtures, to determine the error rate, sensitivity, and reproducibility of this system. The system resulted in coverage averaging 2000× across the various amplicons and was able to reliably and reproducibly identify mutations present at a rate of 5%. Identification of mutations present at a lower rate was possible by altering the parameters by which calls were made, but with an increase in erroneous, low-level calls. The panel was able to identify known mutations in these cell lines that are present in the COSMIC database. In addition, other, novel mutations were also identified that may prove clinically useful. The system was assessed for systematic errors such as homopolymer effects, end of amplicon effects and patterns in NO CALL sequence. Overall, the system is adequate at identifying the known, targeted mutations in the panel. Published by Elsevier Inc.

  6. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an ...

  7. Machine consciousness.

    Science.gov (United States)

    Aleksander, Igor

    2005-01-01

    The work from several laboratories on the modeling of consciousness is reviewed. This ranges, on one hand, from purely functional models where behavior is important and leads to an attribution of consciousness to, on the other hand, material work closely derived from the information about the anatomy of the brain. At the functional end of the spectrum, applications are described specifically directed at a job-finding problem, where the person being served should not discern between being served by a conscious human or a machine. This employs an implementation of global workspace theories. At the material end, attempts at modeling attentional brain mechanisms, and basic biochemical processes in children are discussed. There are also general prescriptions for functional schemas that facilitate discussions for the presence of consciousness in computational systems and axiomatic structures that define necessary architectural features without which it would be difficult to represent sensations. Another distinction between these two approaches is whether one attempts to model phenomenology (material end) or not (functional end). The former is sometimes called "synthetic phenomenology." The upshot of this chapter is that studying consciousness through the design of machines is likely to have two major outcomes. The first is to provide a wide-ranging computational language to express the concept of consciousness. The second is to suggest a wide-ranging set of computational methods for building competent machinery that benefits from the flexibility of conscious representations.

  8. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    Directory of Open Access Journals (Sweden)

    Miguel Gavilán

    2012-01-01

    Full Text Available This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, applied to the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.
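
    The detection stage described above applies a (restricted) Hough transform to contour information to find circular sign candidates before SVM classification. The sketch below shows a plain OpenCV Hough-circle detection for that candidate step; the restricted variant, the SVM recognition stage and the I2V filtering of the paper are not reproduced, and the detector parameters are assumptions.

```python
# Candidate detection sketch for the circular-sign case discussed above, using
# OpenCV's standard Hough circle transform. The paper's restricted Hough
# variant, SVM recognition stage and I2V-based filtering are not reproduced,
# and the detector parameters below are illustrative assumptions.
import cv2
import numpy as np

def detect_circular_candidates(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=120, param2=40, minRadius=10, maxRadius=80)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in circles[0]]

if __name__ == "__main__":
    # Synthetic frame with one dark ring standing in for a circular sign border.
    frame = np.full((240, 320, 3), 180, np.uint8)
    cv2.circle(frame, (160, 120), 40, (0, 0, 255), 6)
    for x, y, r in detect_circular_candidates(frame):
        print(f"candidate circle at ({x}, {y}), radius {r} px")
```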

  9. Complete vision-based traffic sign recognition supported by an I2V communication system.

    Science.gov (United States)

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, applied to the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.

  10. Support Spinor Machine

    OpenAIRE

    Kanjamapornkul, Kabin; Pinčák, Richard; Chunithpaisan, Sanphet; Bartoš, Erik

    2017-01-01

    We generalize a support vector machine to a support spinor machine by using the mathematical structure of wedge product over vector machine in order to extend field from vector field to spinor field. The separated hyperplane is extended to Kolmogorov space in time series data which allow us to extend a structure of support vector machine to a support tensor machine and a support tensor machine moduli space. Our performance test on support spinor machine is done over one class classification o...

  11. Translating visions into realities.

    Science.gov (United States)

    Nesje, Arne

    2006-08-01

    The overall vision and the building concept. The overall vision with individual buildings that have focal points for the related medical treatment may seem to increase both investment and operational cost, especially in the period until the total hospital is finished (2014). The slogan "Better services at lower cost" is probably a vision that will prove to be hard to fulfil. But the patients will probably be the long-term winners with single rooms with bathroom, high standards of service, good architecture and a pleasant environment. The challenge will be to get the necessary funding for running the hospital. The planning process and project management Many interviewees indicate how difficult it is to combine many functions and requirements in one building concept. Different architectural, technical, functional and economic interests will often cause conflict. The project organisation HBMN was organised outside the administration of both STOLAV and HMN. A closer connection and better co-operation with STOLAV may have resulted in more influence from the medical employees. It is probably fair to anticipate that the medical employees would have felt more ownership of the process and thus be more satisfied with the concept and the result. On the other hand the organisation of the project outside the hospital administration may have contributed to better control and more professional management of the construction project. The management for planning and building (technical programme, environmental programme, aesthetical The need for control on site was probably underestimated. For STOLAV technical department (TD) the building process has been time-consuming by giving support, making controls and preparing the take-over phase. But during this process they have become better trained to run and operate the new centres. The commissioning phase has been a challenging time. There were generally more changes, supplementation and claims than anticipated. The investment costs

  12. Development of Binocular Vision

    Directory of Open Access Journals (Sweden)

    Muhammad Syauqie

    2014-01-01

    Full Text Available Binocular vision literally means vision with two eyes, and with binocular vision we can see the world in three dimensions even though the images that fall on the two retinas are 2-dimensional. Binocular vision also provides some advantages, including improved visual acuity, contrast sensitivity, and visual field compared with monocular vision. Normal binocular vision requires a clear visual axis, sensory fusion, and motor fusion. In humans, the sensitive period of binocular vision development begins at around 3 months of age, reaches its peak at the age of 1 to 3 years, is fully developed at the age of 4 years, and gradually declines until it stops at the age of 9 years. Various obstacles, such as sensory, motor, and central obstacles, within the reflex pathway are very likely to inhibit the development of binocular vision, especially in the sensitive period

  13. Stereo Vision Inside Tire

    Science.gov (United States)

    2015-08-21

    P.S. Els, C.M. Becker, University of Pretoria; contract W911NF-14-1-0590 (final report). ... on the development of a stereo vision system that can be mounted inside a rolling tire, known as T2-CAM for Tire-Terrain CAMera. The T2-CAM system

  14. Stereo vision and strabismus.

    Science.gov (United States)

    Read, J C A

    2015-02-01

    Binocular stereopsis, or stereo vision, is the ability to derive information about how far away objects are, based solely on the relative positions of the object in the two eyes. It depends on both sensory and motor abilities. In this review, I briefly outline some of the neuronal mechanisms supporting stereo vision, and discuss how these are disrupted in strabismus. I explain, in some detail, current methods of assessing stereo vision and their pros and cons. Finally, I review the evidence supporting the clinical importance of such measurements.

  15. Anchoring visions in organizations

    DEFF Research Database (Denmark)

    Simonsen, Jesper

    1999-01-01

    This paper introduces the term 'anchoring' within systems development: Visions, developed through early systems design within an organization, need to be deeply rooted in the organization. A vision's rationale needs to be understood by those who decide if the vision should be implemented as well as by those involved in the actual implementation. A model depicting a recent trend within systems development is presented: Organizations rely on purchasing generic software products and/or software development outsourced to external contractors. A contemporary method for participatory design, where...

  16. Machine Learning for Medical Imaging

    Science.gov (United States)

    Erickson, Bradley J.; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L.

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. PMID:28212054

  17. Machine Learning for Medical Imaging.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. © RSNA, 2017.

  18. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today’s markets. The authors address traditional machining topics, such as: single and multiple point cutting processes; grinding; components accuracy and metrology; shear stress in cutting; cutting temperature and analysis; and chatter. They also address non-traditional machining, such as: electrical discharge machining; electrochemical machining; and laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  19. The Bi-Directional Prediction of Carbon Fiber Production Using a Combination of Improved Particle Swarm Optimization and Support Vector Machine.

    Science.gov (United States)

    Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng

    2014-12-30

    This paper creates a bi-directional prediction model to predict the performance of carbon fiber and the productive parameters based on a support vector machine (SVM) and improved particle swarm optimization (IPSO) algorithm (SVM-IPSO). In the SVM, it is crucial to select the parameters that have an important impact on the performance of prediction. The IPSO is proposed to optimize them, and then the SVM-IPSO model is applied to the bi-directional prediction of carbon fiber production. The predictive accuracy of SVM is mainly dependent on its parameters, and IPSO is thus exploited to seek the optimal parameters for SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we propose IPSO by incorporating information of the global best solution into the search strategy to improve exploitation, and we employ IPSO to establish the bi-directional prediction model: in the direction of the forward prediction, we consider productive parameters as input and property indexes as output; in the direction of the backward prediction, we consider property indexes as input and productive parameters as output, and in this case, the model becomes a scheme design for novel style carbon fibers. The results from a set of the experimental data show that the proposed model can outperform the radial basis function neural network (RNN), the basic particle swarm optimization (PSO) method and the hybrid approach of genetic algorithm and improved particle swarm optimization (GA-IPSO) method in most of the experiments. In other words, simulation results demonstrate the effectiveness and advantages of the SVM-IPSO model in dealing with the problem of forecasting.
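
    The central idea of the abstract, using particle swarm optimization to choose the SVM parameters, can be sketched with a basic PSO loop around a cross-validated score. This is the plain PSO update rule rather than the paper's improved variant, and the search ranges, swarm size and toy regression data are assumptions.

```python
# Sketch of PSO-tuned SVM hyperparameters (C, gamma), the core idea of the
# SVM-IPSO model above. This is the plain PSO update rule, not the paper's
# improved variant, and the search ranges and toy data are assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X, y = make_regression(n_samples=120, n_features=6, noise=5.0, random_state=0)

def fitness(log_params):
    c, gamma = 10.0 ** log_params
    return cross_val_score(SVR(C=c, gamma=gamma), X, y, cv=3, scoring="r2").mean()

low, high = np.array([-1.0, -4.0]), np.array([3.0, 0.0])   # log10(C), log10(gamma)
n_particles, n_iter, w, c1, c2 = 12, 20, 0.7, 1.5, 1.5
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, " CV R2:", round(float(pbest_val.max()), 3))
```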

  20. The Bi-Directional Prediction of Carbon Fiber Production Using a Combination of Improved Particle Swarm Optimization and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Chuncai Xiao

    2014-12-01

    Full Text Available This paper creates a bi-directional prediction model to predict the performance of carbon fiber and the productive parameters based on a support vector machine (SVM) and improved particle swarm optimization (IPSO) algorithm (SVM-IPSO). In the SVM, it is crucial to select the parameters that have an important impact on the performance of prediction. The IPSO is proposed to optimize them, and then the SVM-IPSO model is applied to the bi-directional prediction of carbon fiber production. The predictive accuracy of SVM is mainly dependent on its parameters, and IPSO is thus exploited to seek the optimal parameters for SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we propose IPSO by incorporating information of the global best solution into the search strategy to improve exploitation, and we employ IPSO to establish the bi-directional prediction model: in the direction of the forward prediction, we consider productive parameters as input and property indexes as output; in the direction of the backward prediction, we consider property indexes as input and productive parameters as output, and in this case, the model becomes a scheme design for novel style carbon fibers. The results from a set of the experimental data show that the proposed model can outperform the radial basis function neural network (RNN), the basic particle swarm optimization (PSO) method and the hybrid approach of genetic algorithm and improved particle swarm optimization (GA-IPSO) method in most of the experiments. In other words, simulation results demonstrate the effectiveness and advantages of the SVM-IPSO model in dealing with the problem of forecasting.

  1. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.
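    The DV-RBF and FJ-RBF methods themselves are not specified here; as a loose, assumed stand-in for machine-learning species assignment from barcode sequences, one could encode each sequence as k-mer counts and train an RBF-kernel classifier, as in the toy Python sketch below (sequences and species labels are invented, not the paper's data).

    # Toy illustration only: k-mer counts + RBF-kernel SVM for species assignment.
    from itertools import product
    import numpy as np
    from sklearn.svm import SVC

    KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

    def kmer_counts(seq):
        # Non-overlapping counts are fine for a sketch.
        return np.array([seq.count(k) for k in KMERS], dtype=float)

    train_seqs = ["ACGTACGTGGCA", "ACGTACGTGGCC", "TTTTGGGGCCAA", "TTTTGGGGCCAT"]
    train_labels = ["species_A", "species_A", "species_B", "species_B"]

    X = np.vstack([kmer_counts(s) for s in train_seqs])
    clf = SVC(kernel="rbf", gamma="scale").fit(X, train_labels)

    query = "ACGTACGTGGCT"
    print(clf.predict([kmer_counts(query)])[0])   # likely 'species_A' on this toy data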

  2. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  3. Artificial Vision: Vision of a Newcomer

    Science.gov (United States)

    Fujikado, Takashi; Sawai, Hajime; Tano, Yasuo

    The Japanese Consortium for an Artificial Retina has developed a new stimulating method named Suprachoroidal-Transretinal Stimulation (STS). Using STS, electrically evoked potentials (EEPs) were effectively elicited in Royal College of Surgeons (RCS) rats and in rabbits and cats with normal vision, using relatively small stimulus currents, such that the spatial resolution appeared to be adequate for a visual prosthesis. The histological analysis showed no damage to the rabbit retina when electrical currents sufficient to elicit distinct EEPs were applied. It was also shown that transcorneal electrical stimulation (TES) to the retina prevented the death of retinal ganglion cells (RGCs). STS, which is less invasive than other retinal prostheses, could be one choice to achieve artificial vision, and the optimal parameters of electrical stimulation may also be effective for the neuroprotection of residual RGCs.

  4. Magnetic equivalent circuit model for unipolar hybrid excitation synchronous machine

    OpenAIRE

    Kupiec Emil; Przyborowski Włodzimierz

    2015-01-01

    Lately, there has been increased interest in hybrid excitation electrical machines. Hybrid excitation is a construction that combines permanent magnet excitation with wound field excitation. Within the general classification, these machines can be classified as modified synchronous machines or inductor machines. These machines may be applied as motors and generators. The complexity of electromagnetic phenomena which occur as a result of coupling of magnetic fluxes of separate excitation syste...

  5. delta-vision

    Data.gov (United States)

    California Department of Resources — Delta Vision is intended to identify a strategy for managing the Sacramento-San Joaquin Delta as a sustainable ecosystem that would continue to support environmental...

  6. Computer Vision Syndrome.

    Science.gov (United States)

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  7. Ohio's Comprehensive Vision Project

    Science.gov (United States)

    Bunner, Richard T.

    1973-01-01

    A vision screening program in seven Ohio counties tested 3,261 preschool children and 44,885 school age children for problems of distance visual acuity, muscle balance, and observable eye problems. (DB)

  8. What Is Low Vision?

    Science.gov (United States)

    ... tasks easier, such as clocks with larger numbers, writing guides, or black and white cutting boards ...

  9. Home vision tests

    Science.gov (United States)

    ... or eye disease and you should have a professional eye examination. Amsler grid test: If the grid appears distorted or broken, there may be a problem with the retina. Distance vision test: If you do not read the ...

  10. Synthetic Vision Systems

    Science.gov (United States)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  11. Leadership and vision

    OpenAIRE

    Rogers, Anita; Reynolds, Jill

    2003-01-01

    'Leadership and vision' is the subject of Chapter 3 and Rogers and Reynolds look at how managers can encourage leadership from other people, whether in their team, the organisation or in collaborative work with different agencies. They explore leadership style, and the extent to which managers can and should adapt their personal style to the differing needs of situations and people. Frontline managers may not always feel that they have much opportunity to influence the grander vision and st...

  12. Experiencing space without vision

    OpenAIRE

    Evyapan, Naz A. G. Z.

    1997-01-01

    Ankara : Bilkent Univ., Department of Interior Architecture and Environmental Design and Institute of Fine Arts, 1997. Thesis (Master's) -- Bilkent University, 1997. Includes bibliographical references. In this study, the human body without vision, and its relation with the surrounding space, is examined. Towards this end, firstly space and the human body are briefly discussed. The sense modalities apart from vision, and the development of spatial cognition for the blind and visually...

  13. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  14. Object recognition with stereo vision and geometric hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.; van der Heijden, Ferdinand

    In this paper we demonstrate a method to recognize 3D objects and to estimate their pose. For that purpose we use a combination of stereo vision and geometric hashing. Stereo vision is used to generate a large number of 3D low level features, of which many are spurious because at that stage of the

  15. Vision and hearing in old age.

    Science.gov (United States)

    Bergman, B; Rosenhall, U

    2001-01-01

    The concomitant occurrence of hearing and visual impairment was investigated as part of an epidemiological longitudinal study of elderly people. An age cohort, originally consisting of 973 elderly people, was examined with visual and hearing tests three times, at ages 70, 81-82 and 88. The best-corrected visual acuity was assessed. Hearing was measured by pure-tone audiometry and by whispered and spoken voice (WSV). At age 70 there was no co-existence of visual and hearing impairments, and about 70% had normal vision and hearing. At 81-82 years, 3-6% (WSV and audiometry, respectively) had low vision combined with hearing loss, and more than one-tenth had normal vision and hearing. At 88 years, 8-13% had low vision and moderate to severe hearing loss, and none of the men and less than one-tenth of the women had normal vision and hearing. At age 88 three times as many women as men had the combination of low vision and normal hearing. Normal vision combined with moderate to severe hearing loss was more often found in 88-year-old men. Mild impairments of both senses were found in 0.5% at age 70, in 22% and 11% (WSV and audiometry, respectively) at age 81-82, and in 23% and 9% at age 88. At age 70 there was a statistical correlation between visual acuity and hearing measured with pure-tone audiometry in the male group: those men with better hearing had slightly better visual capacity than those with hearing loss. No correlations were found for women at age 70, nor for women and men at ages 81-82 and 88. Ophthalmologists and audiology physicians should cooperate closely in the rehabilitation process to reduce disability and improve function and wellbeing among the oldest old.

  16. Spectral-Spatial Classification of Hyperspectral Image Based on Kernel Extreme Learning Machine

    National Research Council Canada - National Science Library

    Chen Chen; Wei Li; Hongjun Su; Kui Liu

    2014-01-01

      Extreme learning machine (ELM) is a single-layer feedforward neural network based classifier that has attracted significant attention in computer vision and pattern recognition due to its fast learning speed and strong generalization...

  17. Online Dynamic Parameter Estimation of Synchronous Machines

    Science.gov (United States)

    West, Michael R.

    Traditionally, synchronous machine parameters are determined through an offline characterization procedure. The IEEE 115 standard suggests a variety of mechanical and electrical tests to capture the fundamental characteristics and behaviors of a given machine. These characteristics and behaviors can be used to develop and understand machine models that accurately reflect the machine's performance. To perform such tests, the machine is required to be removed from service. Characterizing a machine offline can result in economic losses due to down time, labor expenses, etc. Such losses may be mitigated by implementing online characterization procedures. Historically, different approaches have been taken to develop methods of calculating a machine's electrical characteristics, without removing the machine from service. Using a machine's input and response data combined with a numerical algorithm, a machine's characteristics can be determined. This thesis explores such characterization methods and strives to compare the IEEE 115 standard for offline characterization with the least squares approximation iterative approach implemented on a 20 h.p. synchronous machine. This least squares estimation method of online parameter estimation shows encouraging results for steady-state parameters, in comparison with steady-state parameters obtained through the IEEE 115 standard.
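    A minimal sketch of the core idea (estimating parameters from measured input/response data by least squares) is given below; the first-order R-L model, signals and noise levels are generic stand-ins, not the thesis's synchronous-machine formulation.

    # Generic least-squares parameter estimation from input/response data.
    # The R-L circuit model and all signal values are assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    i_meas = 2.0 + np.sin(2 * np.pi * 5 * t)          # hypothetical current signal
    di_dt = np.gradient(i_meas, t)
    R_true, L_true = 0.8, 0.05
    v_meas = R_true * i_meas + L_true * di_dt + rng.normal(scale=0.01, size=t.size)

    # v = R*i + L*di/dt  ->  solve [i, di/dt] @ [R, L] = v in the least-squares sense
    A = np.column_stack([i_meas, di_dt])
    (R_est, L_est), *_ = np.linalg.lstsq(A, v_meas, rcond=None)
    print(f"R estimate: {R_est:.3f} ohm, L estimate: {L_est:.4f} H")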

  18. Addiction Machines

    Directory of Open Access Journals (Sweden)

    James Godley

    2011-10-01

    Full Text Available Entry into the crypt William Burroughs shared with his mother opened and shut around a failed re-enactment of William Tell’s shot through the prop placed upon a loved one’s head. The accidental killing of his wife Joan completed the installation of the addictation machine that spun melancholia as manic dissemination. An early encryptment to which was added the audio portion of abuse deposited an undeliverable message in WB. William could never tell, although his corpus bears the inscription of this impossibility as another form of possibility. James Godley is currently a doctoral candidate in English at SUNY Buffalo, where he studies psychoanalysis, Continental philosophy, and nineteenth-century literature and poetry (British and American). His work on the concept of mourning and “the dead” in Freudian and Lacanian approaches to psychoanalytic thought and in Gothic literature has also spawned an essay on zombie porn. Since entering the Academy of Fine Arts Karlsruhe in 2007, Valentin Hennig has studied in the classes of Silvia Bächli, Claudio Moser, and Corinne Wasmuht. In 2010 he spent a semester at the Dresden Academy of Fine Arts. His work has been shown in group exhibitions in Freiburg and Karlsruhe.

  19. Monitoring Electron Beam Freeform Fabrication by Active Machine Vision Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Additive manufacturing is a modern fabrication process by which three dimensional components are built up layer-by-layer. Each layer corresponds to a cross-section...

  20. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.
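    The FPGA depth-map pipeline itself is beyond a short sketch, but the Kalman-filter stabilisation of tracking mentioned above can be illustrated with a minimal constant-velocity filter in Python; the frame interval, noise covariances and measurements are assumed values, not the authors'.

    # Minimal constant-velocity Kalman filter for tracking stabilisation.
    # All numbers are assumed; this is not the authors' FPGA/software system.
    import numpy as np

    dt = 0.04                                   # assumed 25 fps frame interval
    F = np.array([[1, dt], [0, 1]])             # state: [position, velocity]
    H = np.array([[1.0, 0.0]])                  # only position is measured
    Q = 1e-3 * np.eye(2)                        # process noise (assumed)
    R = np.array([[0.05]])                      # measurement noise (assumed)

    x = np.zeros((2, 1))
    P = np.eye(2)
    for z in [0.0, 0.11, 0.19, 0.32, 0.41]:     # noisy position measurements
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        print(f"filtered position {x[0, 0]:.3f}, velocity {x[1, 0]:.3f}")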

  1. Machine Vision and Advanced Image Processing in Remote Sensing

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    This paper describes the multivariate alteration detection (MAD) transformation which is based on the established canonical correlation analysis. It also proposes post-processing of the change detected by the MAD variates by means of maximum autocorrelation factor (MAF) analysis. As opposed to mo...

  2. Beef identification in industrial slaughterhouses using machine vision techniques

    Directory of Open Access Journals (Sweden)

    J. F. Velez

    2013-10-01

    Full Text Available Accurate individual animal identification provides producers with useful information for taking management decisions about an individual animal or about the complete herd. This identification task is also important to ensure the integrity of the food chain. Consequently, many consumers are turning their attention to issues of quality in animal food production methods. This work describes an implemented solution for individual beef identification, covering the period from cattle shipment arrival at the slaughterhouse until the animals are slaughtered and cut up. Our beef identification approach is image-based, and the pursued goals are the correct automatic extraction and matching of numeric information extracted from the beef ear-tag with the corresponding information from the Bovine Identification Document (BID). The correct identification rate achieved by our method is near 90% under the practical working conditions of slaughterhouses (i.e. problems with dirt and bad illumination conditions). Moreover, the presence of multiple pieces of machinery in industrial slaughterhouses makes the use of Radio Frequency Identification (RFID) beef tags difficult, due to the high risk of interference between RFID and the other technologies in the workplace. The solution presented is hardware/software, since it includes a specialized hardware system that was also developed. Our approach considers the current EU legislation for beef traceability and it reduces the economic cost of individual beef identification with respect to RFID transponders. The implemented system has been in satisfactory use for more than three years in one of the largest industrial slaughterhouses in Spain.

  3. A Multisensor Machine Vision System for Hardwood Defect Detection

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Over the next decade there is going to be a substantial change in the way forest products manufacturing industries do business. The economic forces responsible for these changes include the heightened economic competition that will result from the new world economy and the continued increase in the cost of both raw material and labor. These factors are going to force...

  4. Aphid Identification and Counting Based on Smartphone and Machine Vision

    Directory of Open Access Journals (Sweden)

    Suo Xuesong

    2017-01-01

    Full Text Available Exact enumeration of aphids before an aphid outbreak can provide a basis for precision spraying. This paper describes counting software that can be run on smartphones for real-time enumeration of aphids. As a first step of the method used in this paper, images of the yellow sticky board that is used to catch insects are segmented from the complex background using the GrabCut method; the images are then normalized by a perspective transformation. The second step is pre-treatment of the images; aphid regions are segmented using the Otsu threshold method after the effect of random illumination is eliminated by a single-image difference method. The last step is aphid recognition and counting according to the area feature of the aphids, after extracting their contours with a contour detection method. Finally, the experimental results show that the effect of random illumination can be effectively eliminated by the single-image difference method. The counting accuracy in the greenhouse is above 95%, while it reaches 92.5% outdoors. Thus, the counting software designed in this paper can realize exact enumeration of aphids under complicated illumination and can be widely used. The design method proposed in this paper can provide a basis for precision spraying through its effective detection of insects.
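    A rough Python/OpenCV sketch of the final thresholding and counting steps described above follows; the GrabCut segmentation, perspective normalisation and illumination correction are omitted, and the blob-area range and file name are assumptions, not the paper's values.

    # Otsu threshold + contour-area counting (OpenCV 4.x). Assumed parameters.
    import cv2

    img = cv2.imread("sticky_board.jpg")        # hypothetical input photo
    if img is None:
        raise SystemExit("provide a sticky-board photo named sticky_board.jpg")

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu picks the threshold automatically; aphids are darker than the board,
    # hence the inverted binary image.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs whose area looks like a single aphid (assumed pixel range).
    aphids = [c for c in contours if 20 < cv2.contourArea(c) < 400]
    print("aphid count:", len(aphids))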

  5. Real Time Intelligent Target Detection and Analysis with Machine Vision

    Science.gov (United States)

    Howard, Ayanna; Padgett, Curtis; Brown, Kenneth

    2000-01-01

    We present an algorithm for detecting a specified set of targets for an Automatic Target Recognition (ATR) application. ATR involves processing images for detecting, classifying, and tracking targets embedded in a background scene. We address the problem of discriminating between targets and nontarget objects in a scene by evaluating 40x40 image blocks belonging to an image. Each image block is first projected onto a set of templates specifically designed to separate images of targets embedded in a typical background scene from those background images without targets. These filters are found using directed principal component analysis which maximally separates the two groups. The projected images are then clustered into one of n classes based on a minimum distance to a set of n cluster prototypes. These cluster prototypes have previously been identified using a modified clustering algorithm based on prior sensed data. Each projected image pattern is then fed into the associated cluster's trained neural network for classification. A detailed description of our algorithm will be given in this paper. We outline our methodology for designing the templates, describe our modified clustering algorithm, and provide details on the neural network classifiers. Evaluation of the overall algorithm demonstrates that our detection rates approach 96% with a false positive rate of less than 0.03%.
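    As a loose sketch of the project-then-cluster stage described above: ordinary PCA stands in for the directed principal component analysis, random vectors stand in for the 40x40 image blocks, and the per-cluster neural-network classifiers are omitted.

    # Project image blocks onto a learned basis, then assign each to the nearest
    # cluster prototype. All data are synthetic stand-ins.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    blocks = rng.normal(size=(500, 1600))        # 500 flattened 40x40 blocks (fake)

    templates = PCA(n_components=10).fit(blocks) # stand-in for directed PCA templates
    projected = templates.transform(blocks)

    clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit(projected)
    labels = clusters.predict(projected)         # nearest prototype per block
    print(np.bincount(labels))                   # blocks assigned to each prototype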

  6. Remote sensing of physiological signs using a machine vision system.

    Science.gov (United States)

    Al-Naji, Ali; Gibson, Kim; Chahl, Javaan

    2017-07-01

    The aim of this work is to remotely measure heart rate (HR) and respiratory rate (RR) using a video camera from long range (> 50 m). The proposed system is based on imperceptible signals produced from blood circulation, including skin colour variations and head motion. As these signals are not visible to the naked eye and to preserve the signal strength in the video, we used an improved video magnification technique to enhance these invisible signals and detect the physiological activity within the subject. The software of the proposed system was built in a graphic user interface (GUI) environment to easily select a magnification system to use (colour or motion magnification) and measure the physiological signs independently. The measurements were performed on a set of 10 healthy subjects equipped with a finger pulse oximeter and respiratory belt transducer that were used as reference methods. The experimental results were statistically analysed by using the Bland-Altman method, Pearson's correlation coefficient, Spearman correlation coefficient, mean absolute error, and root mean squared error. The proposed system achieved high correlation even in the presence of movement artefacts, different skin tones, lighting conditions and distance from the camera. With acceptable performance and low computational complexity, the proposed system is a suitable candidate for homecare applications, security applications and mobile health devices.
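    A small sketch of the Bland-Altman agreement analysis mentioned above, comparing hypothetical camera-based heart-rate estimates against a reference oximeter, is shown below; the numbers are invented, not the study's data.

    # Bland-Altman bias and limits of agreement, plus Pearson correlation.
    # Measurements are invented for illustration.
    import numpy as np

    camera_hr = np.array([72, 68, 80, 75, 90, 66, 77, 83, 71, 88], dtype=float)
    oximeter_hr = np.array([70, 69, 78, 76, 92, 65, 75, 85, 70, 87], dtype=float)

    diff = camera_hr - oximeter_hr
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)              # 95% limits of agreement
    r = np.corrcoef(camera_hr, oximeter_hr)[0, 1]
    print(f"bias {bias:+.2f} bpm, limits of agreement +/-{loa:.2f} bpm, Pearson r {r:.3f}")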

  7. Machine vision algorithms applied to dynamic traffic light control

    Directory of Open Access Journals (Sweden)

    Fabio Andrés Espinosa Valcárcel

    2013-01-01

    ... number of cars present in images captured by a set of cameras strategically located at each intersection. Using this information, the system selects the sequence of actions that optimizes vehicle flow within the control zone, in a simulated scenario. The results obtained show that the system reduces delay times for each vehicle by 20% and is also capable of adapting quickly and efficiently to changes in flow.

  8. IDA's Energy Vision 2050

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Lund, Henrik; Hansen, Kenneth

    IDA’s Energy Vision 2050 provides a Smart Energy System strategy for a 100% renewable Denmark in 2050. The vision presented should not be regarded as the only option in 2050 but as one scenario out of several possibilities. With this vision the Danish Society of Engineers, IDA, presents its third contribution for an energy strategy for Denmark. The IDA’s Energy Plan 2030 was prepared in 2006 and IDA’s Climate Plan was prepared in 2009. IDA’s Energy Vision 2050 is developed for IDA by representatives from The Society of Engineers and by a group of researchers at Aalborg University. It is based on state-of-the-art knowledge about how low cost energy systems can be designed while also focusing on long-term resource efficiency. The Energy Vision 2050 has the ambition to focus on all parts of the energy system rather than single technologies, but to have an approach in which all sectors are integrated. While Denmark...

  9. Colour, vision and ergonomics.

    Science.gov (United States)

    Pinheiro, Cristina; da Silva, Fernando Moreira

    2012-01-01

    This paper is based on a research project - Visual Communication and Inclusive Design-Colour, Legibility and Aged Vision - developed at the Faculty of Architecture of Lisbon. The research aims to determine specific design principles to be applied to printed visual communication design objects, so that they can be easily read and perceived by all. The study's target group was composed of socially active individuals between 55 and 80 years of age, and cultural event posters were used as objects of study and observation. The main objective is to bring together the study of areas such as colour, vision, older people's colour vision, ergonomics, chromatic contrasts, typography and legibility. In the end we will produce a manual with guidelines and information for applying scientific knowledge to communication design practice. Within the normal aging process, visual functions gradually decline; the quality of vision worsens, and colour vision and contrast sensitivity are also affected. As people's needs change with age, design should help people and communities and improve quality of life in the present. By applying principles of visually accessible design and ergonomics, printed design objects (or interior spaces, urban environments, products, signage and all kinds of visual information) will be effective and easier on everyone's eyes, not only for visually impaired people but for all of us as we age.

  10. Experimental force modeling for deformation machining stretching ...

    Indian Academy of Sciences (India)

    Deformation machining is a hybrid process that combines two manufacturing processes—thin structure machining and single-point incremental forming. This process enables the creation of complex structures and geometries, which would be rather difficult or sometimes impossible to manufacture. A comprehensive ...

  11. Differential and Combined Effects of Physical Activity Profiles and Prohealth Behaviors on Diabetes Prevalence among Blacks and Whites in the US Population: A Novel Bayesian Belief Network Machine Learning Analysis

    Directory of Open Access Journals (Sweden)

    Azizi A. Seixas

    2017-01-01

    Full Text Available The current study assessed the prevalence of diabetes across four different physical activity lifestyles and inferred through machine learning which combinations of physical activity, sleep, stress, and body mass index yield the lowest prevalence of diabetes in Blacks and Whites. Data were extracted from the National Health Interview Survey (NHIS) dataset from 2004–2013 containing demographics, chronic diseases, and sleep duration (N = 288,888). Of the total sample, 9.34% reported diabetes (where the prevalence of diabetes was 12.92% in Blacks/African Americans and 8.68% in Whites). Over half of the sample reported sedentary lifestyles (Blacks were more sedentary than Whites), approximately 20% reported moderately active lifestyles (Whites more than Blacks), approximately 15% reported active lifestyles (Whites more than Blacks), and approximately 6% reported very active lifestyles (Whites more than Blacks). Across the four physical activity lifestyles, Blacks consistently had a higher diabetes prevalence than their White counterparts. Physical activity combined with healthy sleep, low stress, and average body weight reduced the prevalence of diabetes, especially in Blacks. Our study highlights the need to provide alternative, personalized behavioral/lifestyle recommendations in addition to the generic national physical activity recommendations, specifically among Blacks, to reduce diabetes and narrow diabetes disparities between Blacks and Whites.

  12. Stochastic scheduling on unrelated machines

    NARCIS (Netherlands)

    Skutella, Martin; Sviridenko, Maxim; Uetz, Marc Jochen; Mayr, Ernst W.; Portier, Natacha

    2014-01-01

    Two important characteristics encountered in many real-world scheduling problems are heterogeneous processors and a certain degree of uncertainty about the sizes of jobs. In this paper we address both, and study for the first time a scheduling problem that combines the classical unrelated machine

  13. The machine in multimedia analytics

    NARCIS (Netherlands)

    Zahálka, J.

    2017-01-01

    This thesis investigates the role of the machine in multimedia analytics, a discipline that combines visual analytics with multimedia analysis algorithms in order to unlock the potential of multimedia collections as sources of knowledge in scientific and applied domains. Specifically, the central

  14. Periodontium bestows vision!!

    Directory of Open Access Journals (Sweden)

    Minkle Gulati

    2016-01-01

    Full Text Available The role of the periodontium in supporting the tooth structures is well-known. However, less is known about its contribution to the field of ophthalmology. Corneal diseases are among the major causes of blindness, affecting millions of people worldwide, for which a synthetic keratoprosthesis was considered the last resort to restore vision. Yet these synthetic keratoprostheses suffered from serious limitations, especially the foreign body reactions they invoked, resulting in extrusion of the whole prosthesis from the eye. To overcome these shortcomings, an autologous osteo-odonto keratoprosthesis utilizing intraoral structures was introduced, which can restore vision even in cases of severely damaged eyes. The successful functioning of this prosthesis, however, predominantly depends on the presence of a healthy periodontium for grafting. Therefore, the following short communication aims to acknowledge this lesser-known role of the periodontium and other oral structures in bestowing vision on blind patients.

  15. Overview of sports vision

    Science.gov (United States)

    Moore, Linda A.; Ferreira, Jannie T.

    2003-03-01

    Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).

  16. Representing vision and blindness.

    Science.gov (United States)

    Ray, Patrick L; Cox, Alexander P; Jensen, Mark; Allen, Travis; Duncan, William; Diehl, Alexander D

    2016-01-01

    There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be

  17. Binocular vision in glaucoma.

    Science.gov (United States)

    Reche-Sainz, J A; Gómez de Liaño, R; Toledano-Fernández, N; García-Sánchez, J

    2013-05-01

    To describe the possible impairment of binocular vision in primary open angle glaucoma (POAG) patients. A cross-sectional study was conducted on 58 glaucoma patients, 76 ocular hypertensives and 82 normal subjects. They were examined with a battery of binocular tests consisting of the measurement of phoria angles, amplitudes of fusion (AF), near point of convergence (NPC) assessment, an evaluation of suppression (Worth test), stereoacuity according to Titmus, and TNO tests. The patients with glaucoma showed significantly increased phoria angles, especially in near vision, compared with the ocular hypertensives and controls (P=.000). AF were reduced mainly at near distances compared with hypertensives and controls (P=.000). The NPC of the glaucoma patients was greater than that of the other two groups (P=.000). No differences were found in the near-distance suppression test between the three groups (P=.682), but there were differences at distance in patients with glaucoma compared with hypertensives (OR=3.867; 95% CI: 1.260-11.862; P=.008) and controls (OR=5.831; 95% CI: 2.229-15.252; P=.000). The stereoacuity of patients with glaucoma was reduced in both tests (P=.001). POAG is mostly associated with an increased exophoria in near vision, a decreased AF in near vision, a more remote NPC, central suppression in far vision, and a loss of stereoacuity. These changes do not seem to appear early, as they were not observed in hypertensive patients versus controls. Copyright © 2011 Sociedad Española de Oftalmología. Published by Elsevier España, S.L. All rights reserved.

  18. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system designed to improve the safety and effectiveness of the vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors as well as operator errors.

  19. Laser machining of advanced materials

    CERN Document Server

    Dahotre, Narendra B

    2011-01-01

    Advanced materials: introduction; applications; structural ceramics; biomaterials; composites; intermetallics. Machining of advanced materials: introduction; fabrication techniques; mechanical machining; chemical machining (CM); electrical machining; radiation machining; hybrid machining. Laser machining: introduction; absorption of laser energy and multiple reflections; thermal effects. Laser machining of structural ceramics: introdu...

  20. Viscoelastic machine elements elastomers and lubricants in machine systems

    CERN Document Server

    MOORE, D F

    2015-01-01

    Viscoelastic Machine Elements, which encompass elastomeric elements (rubber-like components), fluidic elements (lubricating squeeze films) and their combinations, are used for absorbing vibration, reducing friction and improving energy use. Examples include pneumatic tyres, oil and lip seals, compliant bearings and races, and thin films. This book sets out to show that these elements can be incorporated in machine analysis, just as in the case of conventional elements (e.g. gears, cogs, chain drives, bearings). This is achieved by introducing elementary theory and models, by describing new an